# AnThroM Experiment 1: testing the relation between dispositional anthropomorphism and Theory-of-Mind network activation
by Ruud Hortensius and Michaela Kent (University of Glasgow) - June 2019 - …
Data for the Theory-of-Mind functional localiser and the Individual Differences in Anthropomorphism Questionnaire come from five different studies (collected across four datasets).
Dataset_1: Bangor Imaging Unit; EMBOTS; n=29 (including 1 pilot scan); full dataset and publication: Cross…Hortensius (2019) PTRB.
Dataset_2: Centre for Cognitive NeuroImaging; SHAREDBOTS; n=35 (including 2 pilot scans); publication: Hortensius & Cross, in preparation.
Dataset_3: Centre for Cognitive NeuroImaging; Two studies with the same parameters: n=22 (including 2 pilot scans). Social_Gradient_1; n=10 (pilot experiment) and BOLDlight; n=12.
Dataset_4: Centre for Cognitive NeuroImaging; GAMEBOTS; n=22.
Get info for table S1:
library("tidyverse")
#load own data
DF.dataset1 <- read_tsv(file = "/Volumes/Project0255/dataset_1/participants.tsv")
DF.dataset2 <- read_tsv(file = "/Volumes/Project0255/dataset_2/participants.tsv")
DF.dataset3 <- read_tsv(file = "/Volumes/Project0255/dataset_3/participants.tsv")
DF.dataset4 <- read_tsv(file = "/Volumes/Project0255/dataset_4/participants.tsv")
#combine data
bind_rows(DF.dataset1, DF.dataset2, DF.dataset3, DF.dataset4, .id = "dataset") %>%
group_by(dataset) %>%
summarise(mean = mean(age),
sd = sd(age))
bind_rows(DF.dataset1, DF.dataset2, DF.dataset3, DF.dataset4, .id = "dataset") %>%
group_by(dataset, sex) %>%
tally()
All participants completed a Theory-of-Mind localiser (Jacoby et al., 2016; Richardson et al., 2018) and an anatomical scan, either in the same session or in two separate sessions. During the localiser participants passively viewed a short 5.6 min animated film (Partly Cloudy). This movie includes scenes depicting pain (e.g. an alligator biting the main character) and events that trigger mentalizing (e.g. the main character revealing its intentions). For dataset_3 and dataset_4 a fieldmap was collected as well. At the end of each experiment participants completed the Individual Differences in Anthropomorphism Questionnaire (IDAQ) (Waytz et al., 2010).
BOLD:
Dataset_1: 3x3x3.5mm voxels, 32 slices, repetition time = 2s, echo time = 30ms
Dataset_2: 3mm isotropic, 37 slices, TR = 2s, TE = 30ms
Dataset_3: 2mm isotropic, 68 slices, TR = 2s, TE = 26ms
Dataset_4: 2.75 x 2.75 x 4mm, 32 slices, TR = 2s, TE = 13 and 31ms
T1W:
Dataset_1: 1mm isotropic resolution, TR = 12ms, TE = 3.47 / 5.15 / 6.83 / 8.52 / 10.20ms (SENSE)
Dataset_2 - 4: 1mm isotropic resolution, TR = 2.3s, TE = 29.6ms (ADNI)
Fieldmaps:
Dataset_1: no, so --use-syn-sdc
Dataset_2: no, so --use-syn-sdc
Dataset_3: yes
Dataset_4: yes
Note: the language is listed for each code chunk, but all chunks except the R chunks are executed in the terminal.
For this you need HeuDiConv (Heuristic DICOM Converter).
Based on the tutorial by Franklin Feingold.
Download the latest version of HeuDiConv (we used 0.6.0.dev1):
docker pull nipy/heudiconv:latest
If on the GRID do:
singularity pull docker://nipy/heudiconv:latest
Create the info file (dataset_2 - 4):
docker run --rm -it -v /Volumes/Project0255/:/base nipy/heudiconv:latest -d /base/dataset_3/sourcedata/sub-{subject}/*.IMA -o /base/dataset_3 -f convertall -s 315 -c none --overwrite
For dataset_1 we first need to convert the .dcm files from JPEG 2000 lossless to uncompressed DICOM (thanks to Michele Svanera for the code):
python3 convert_all_compressed_dicom.py
Create the info file (dataset_1):
docker run --rm -it -v /Volumes/Project0255/:/base nipy/heudiconv:latest -d /base/dataset_1/sourcedata/sub-{subject}/ses-{session}/*.dcm -o /base/dataset_1 -f convertall -s 129 -ss 01 -c none --overwrite
Get the info file:
cp /Volumes/Project0255/code/.heudiconv/301/info/dicominfo.tsv /Volumes/Project0255/code
Create the following python file and save it in the code folder. There is one functional task (func_movie) and one anatomical scan (t1w). Dataset_3 and dataset_4 have a fieldmap as well (fmap_phase and fmap_magnitude).
Create a heuristic to automatically convert the files:
import os

def create_key(template, outtype=('nii.gz',), annotation_classes=None):
    if template is None or not template:
        raise ValueError('Template must be a valid format string')
    return template, outtype, annotation_classes

def infotodict(seqinfo):
    """Heuristic evaluator for determining which runs belong where

    allowed template fields - follow python string module:
    item: index within category
    subject: participant id
    seqitem: run number during scanning
    subindex: sub index within group
    session: session id (only for dataset_1)
    """
    t1w1 = create_key('sub-{subject}/{session}/anat/sub-{subject}_{session}_T1w')
    func_movie1 = create_key('sub-{subject}/{session}/func/sub-{subject}_{session}_task-movie_bold')
    t1w = create_key('sub-{subject}/anat/sub-{subject}_T1w')
    func_movie = create_key('sub-{subject}/func/sub-{subject}_task-movie_bold')
    func_movie_echo_1 = create_key('sub-{subject}/func/sub-{subject}_task-movie_echo-1_bold')
    func_movie_echo_2 = create_key('sub-{subject}/func/sub-{subject}_task-movie_echo-2_bold')
    fmap_phase = create_key('sub-{subject}/fmap/sub-{subject}_phasediff')
    fmap_magnitude = create_key('sub-{subject}/fmap/sub-{subject}_magnitude')
    info = {t1w1: [], func_movie1: [], t1w: [], func_movie: [],
            fmap_phase: [], fmap_magnitude: [],
            func_movie_echo_1: [], func_movie_echo_2: []}
    for idx, s in enumerate(seqinfo):
        if ('T1W_1mm_sag SENSE' in s.protocol_name):
            info[t1w1].append(s.series_id)
        if ('ToM_PartlyCloudy SENSE' in s.protocol_name):
            info[func_movie1].append(s.series_id)
        if ('t1_mpr_ns_sag_iso_ADNI_32ch' in s.protocol_name):
            info[t1w].append(s.series_id)
        if ('t1_mpr_ns_sag_P2_ADNI_32ch' in s.protocol_name):
            info[t1w].append(s.series_id)
        if (s.dim4 == 175) and ('FMRI_MB2_p2_2MMISO_TR2_movie' in s.protocol_name):
            info[func_movie].append(s.series_id)
        if (s.dim4 == 175) and ('FMRI_MB2_movie_p2_2MMISO_TR2' in s.protocol_name):
            info[func_movie].append(s.series_id)
        if (s.dim4 == 170) and ('ep2d_ToM_Loc' in s.protocol_name):
            info[func_movie].append(s.series_id)
        if (s.dim4 == 175) and ('ep2d_ToM_Loc' in s.protocol_name):
            info[func_movie].append(s.series_id)
        if (s.dim4 == 175) and ('ep2d_ToM_Loc_boldTR2' in s.protocol_name):
            info[func_movie].append(s.series_id)
        if (s.TE == 13) and ('BP_ep2d_multiecho_32ch_p3_TOM' in s.protocol_name):
            info[func_movie_echo_1].append(s.series_id)
        if (s.TE == 31.36) and ('BP_ep2d_multiecho_32ch_p3_TOM' in s.protocol_name):
            info[func_movie_echo_2].append(s.series_id)
        if (s.dim3 == 92) and ('gre_field_mapping_AAH' in s.protocol_name):
            info[fmap_magnitude].append(s.series_id)
        if (s.dim3 == 46) and ('gre_field_mapping_AAH' in s.protocol_name):
            info[fmap_phase].append(s.series_id)
        if (s.dim3 == 64) and ('gre_field_mapping_AAH' in s.protocol_name):
            info[fmap_magnitude].append(s.series_id)
        if (s.dim3 == 32) and ('gre_field_mapping_AAH' in s.protocol_name):
            info[fmap_phase].append(s.series_id)
    return info
Use the heuristic file to convert the Dicom files to .nii.gz (nifti) and create .json files:
docker run --rm -it -v /Volumes/Project0255/:/base nipy/heudiconv:latest -d /base/dataset_4/sourcedata/sub-{subject}/*.IMA -o /base/dataset_4 -f /base/code/heuristic_anthrom.py -s 401 -c dcm2niix -b --overwrite
For dataset_1, add ses-{session}/ to the path, use the --ss 01 flag, and match .dcm instead of .IMA. The movie for sub-101 and sub-102 is in ses-02:
docker run --rm -it -v /Volumes/Project0255/:/base nipy/heudiconv:latest -d /base/dataset_1/sourcedata/sub-{subject}/ses-{session}/*.dcm -o /base/dataset_1 -f /base/code/heuristic_anthrom.py -s 121 -ss 02 -c dcm2niix -b --overwrite
On the GRID do (sub-116 was done manually in dcm2niigui):
Type bash before running.
Dataset_1:
singularity run -B /analyse/Project0255/:/base /analyse/Project0255/my_images/heudiconv_latest.sif -d /base/dataset_1/sourcedata/sub-{subject}/ses-{session}/*.dcm -o /base/dataset_1/ -f /base/code/heuristic_anthrom.py -s 116 -ss 01 -c dcm2niix -b --overwrite
Dataset_2 - 4:
singularity run -B /analyse/Project0255/:/base /analyse/Project0255/my_images/heudiconv_latest.sif -d /base/dataset_2/sourcedata/sub-{subject}/*.IMA -o /base/dataset_2/ -f /base/code/heuristic_anthrom.py -s 201 -c dcm2niix -b --overwrite
Deface using Pydeface:
#!/bin/bash
set -e
####For loop that defaces the MRI per subject and replaces the old MRI with the new defaced MRI
rootfolder=/Volumes/Project0255/dataset_4
for subj in 401; do
    echo "Defacing participant $subj"
    pydeface ${rootfolder}/sub-${subj}/anat/sub-${subj}_T1w.nii.gz
    rm -f ${rootfolder}/sub-${subj}/anat/sub-${subj}_T1w.nii.gz
    mv ${rootfolder}/sub-${subj}/anat/sub-${subj}_T1w_defaced.nii.gz ${rootfolder}/sub-${subj}/anat/sub-${subj}_T1w.nii.gz
done
For dataset_1: ses-01: 101 102 103 107 112 113 117 118 119 122 123 124 128 ses-02: 104 105 106 108 109 110 111 115 116 120 121 125 126 127
#!/bin/bash
set -e
rootfolder=/Volumes/Project0255/dataset_1
for subj in 129; do
    echo "Defacing participant $subj"
    for session in 01; do
        for echo in 1 2 3 4 5; do
            pydeface ${rootfolder}/sub-${subj}/ses-${session}/anat/sub-${subj}_ses-${session}_echo-${echo}_T1w.nii.gz
            rm -f ${rootfolder}/sub-${subj}/ses-${session}/anat/sub-${subj}_ses-${session}_echo-${echo}_T1w.nii.gz
            mv ${rootfolder}/sub-${subj}/ses-${session}/anat/sub-${subj}_ses-${session}_echo-${echo}_T1w_defaced.nii.gz ${rootfolder}/sub-${subj}/ses-${session}/anat/sub-${subj}_ses-${session}_echo-${echo}_T1w.nii.gz
        done
    done
done
You need to specify the "IntendedFor" field in the _phasediff.json files to indicate which scans the estimated fieldmap should be applied to.
Run the following script (thanks to Michele Svanera for the code):
python change_json.py
For dataset_4 we need to combine the two echoes (see NeuroStar for more info). We created a dual_sum volume by adding the two images together (see Halai et al., 2014).
Run the following script (thanks to Tyler Morgan for the code):
python sum_echo.py
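sum_echo.py is likewise not reproduced here; the core operation is simply a voxelwise addition of the two echo images. A minimal sketch, assuming NumPy (the nibabel I/O lines in the comments are an assumption about the original script, and the file names are hypothetical):

```python
import numpy as np

def dual_sum(echo1, echo2):
    """Voxelwise sum of two echo images (cf. Halai et al., 2014)."""
    return echo1 + echo2

# With nibabel (assumed I/O, not shown in the original script):
# import nibabel as nib
# e1 = nib.load('sub-401_task-movie_echo-1_bold.nii.gz')
# e2 = nib.load('sub-401_task-movie_echo-2_bold.nii.gz')
# summed = nib.Nifti1Image(dual_sum(e1.get_fdata(), e2.get_fdata()),
#                          e1.affine, e1.header)
# nib.save(summed, 'sub-401_task-movie_bold.nii.gz')
```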
Create the tsv files for the functional localiser. Event coding (in seconds; 10 s of fixation before the movie starts; accounting for hemodynamic lag) is based on the reverse-correlation analyses of Richardson et al. (2018).
Note: for sub-322 the trigger was at the start of the movie (thus create a different tsv with every onset shifted by -10 s). Check the triggers for dataset_1.
PartlyCloudy <- data.frame(onset = c(86, 98, 120, 176, 238, 252, 300, 70, 92, 106, 136, 194, 210, 228, 262, 312), #create the events (same for every sub)
duration = c(4, 6, 4, 16, 6, 8, 6, 4, 2, 4, 10, 4, 12, 6, 6, 4),
trial_type = c(rep("mental",7), rep("pain",9)))
#dataset_1
for (sub in 102:129){ #note: localisers for sub-101 are in ses-02
filename = paste("/Volumes/Project0255/dataset_1/sub-", sub, "/ses-01/func/sub-", sub, "_ses-01_task-movie_events.tsv", sep ="")
write.table(PartlyCloudy, file = filename, sep="\t", row.names = FALSE, quote = FALSE)
}
#dataset_2
for (sub in 201:235){
filename = paste("/Volumes/Project0255/dataset_2/sub-", sub, "/func/sub-", sub, "_task-movie_events.tsv", sep ="")
write.table(PartlyCloudy, file = filename, sep="\t", row.names = FALSE, quote = FALSE)
}
#dataset_3
for (sub in 301:322){ #note: localisers for sub-322 should have t-10 (no trigger) <-manually correct this
filename = paste("/Volumes/Project0255/dataset_3/sub-", sub, "/func/sub-", sub, "_task-movie_events.tsv", sep ="")
write.table(PartlyCloudy, file = filename, sep="\t", row.names = FALSE, quote = FALSE)
}
#dataset_4
for (sub in 401:422){
filename = paste("/Volumes/Project0255/dataset_4/sub-", sub, "/func/sub-", sub, "_task-movie_events.tsv", sep ="")
write.table(PartlyCloudy, file = filename, sep="\t", row.names = FALSE, quote = FALSE)
}
Use the BIDS-Validator to check if the dataset is BIDS compliant:
docker run -ti --rm -v /Volumes/Project0255/dataset_4:/data:ro bids/validator /data
MRIQC is a Docker-based tool for quality control of the data.
MRIQC 0.14.2 was used:
docker run -it --rm -v /Volumes/Project0255/dataset_1/:/data:ro -v /Volumes/Project0255/dataset_1/derivatives/mriqc:/out poldracklab/mriqc:0.14.2 /data /out participant --participant-label 101 -m T1w bold --ica --fft-spikes-detector
On the grid do (cd in /analyse folder):
singularity run --cleanenv /analyse/Project0255/my_images/mriqc-0.14.2.simg /analyse/Project0255/dataset_1 /analyse/Project0255/dataset_1/derivatives/mriqc participant --participant-label 123 -m T1w bold --ica --fft-spikes-detector -w /analyse/Project0255/work
Run it separately for the datasets. Change participant to group to create the group reports:
docker run -it --rm -v /Volumes/Project0255/dataset_4/:/data:ro -v /Volumes/Project0255/dataset_4/derivatives/mriqc:/out poldracklab/mriqc:0.14.2 /data /out group
Plot the output. This is based on MRIQCeption; the MRIQCeption visualisation by Catherine Walsh was adapted. Adjust the filter if you want to look at different measures.
Adjust this to your liking (e.g. bold: fd_mean, fd_perc, dvars_std, dvars_vstd, gcor, tsnr, t1w: cjv, cnr, snr, efc, inu, wm2max, fwhm) and modality (bold or t1w):
QCmeasure <- "fd_mean"
modality <- "bold"
Run the following code. Change the script below to load the group results for the different datasets:
#libraries
library("tidyverse")
source("/Volumes/Project0255/code/R_rainclouds.R")
#load own data
DF.dataset1 <- read_tsv(file = paste("/Volumes/Project0255/dataset_1/derivatives/mriqc/group_", modality, ".tsv", sep ="")) %>%
gather("measure", "value", 2:46) %>%
select("bids_name","measure", "value")
DF.dataset2 <- read_tsv(file = paste("/Volumes/Project0255/dataset_2/derivatives/mriqc/group_", modality, ".tsv", sep ="")) %>%
gather("measure", "value", 2:46) %>%
select("bids_name","measure", "value")
DF.dataset3 <- read_tsv(file = paste("/Volumes/Project0255/dataset_3/derivatives/mriqc/group_", modality, ".tsv", sep ="")) %>%
gather("measure", "value", 2:46) %>%
select("bids_name","measure", "value")
DF.dataset4 <- read_tsv(file = paste("/Volumes/Project0255/dataset_4/derivatives/mriqc/group_", modality, ".tsv", sep ="")) %>%
gather("measure", "value", 2:46) %>%
select("bids_name","measure", "value")
#select the most relevant measures
#selectionMeasure <- c("snr", "tsnr", "efc", "fber", "gsr_x", "gsr_y", "dvars_nstd", "dvars_std", "dvars_vstd", "gcor", "fd_mean", "fd_number", "fd_percentage", "spikes", "aor", "aqi")
#combine data
DF.full <- bind_rows(DF.dataset1, DF.dataset2, DF.dataset3, DF.dataset4, .id = "dataset") %>%
group_by(dataset) %>%
filter(measure == QCmeasure) #%in% c(selectionMeasure))
#create raincloud plot (check out the [github](https://github.com/RainCloudPlots/) or the [preprint](https://wellcomeopenresearch.org/articles/4-63/v1))
p <- ggplot(DF.full,aes(x=dataset,y=value,fill=dataset))+
geom_flat_violin(position=position_nudge(x = .2, y = 0),adjust =2, trim = FALSE, alpha = .5, colour = NA)+
geom_point(aes(colour = dataset), position=position_jitter(width = .05), size = .5, shape = 20)+
geom_boxplot(aes(x=dataset,y=value),position=position_nudge(x = .1, y = 0),outlier.shape = NA, alpha = .5, width = .1, colour = "black")+
#facet_wrap(. ~ dataset) +
theme_classic() + ylab(QCmeasure) + scale_fill_brewer(palette = "Reds") +
scale_colour_brewer(palette = "Reds") + ggtitle(paste("Comparison of", modality, "QC measure", QCmeasure, "between datasets")) +
facet_wrap(~measure)
p
fMRIprep is a Docker-based tool for preprocessing of the fMRI data.
fMRIprep version 1.5.2 was used on a local iMac.
If you run into memory problems you can use --skip_bids_validation; we skipped the --write-graph flag to save space, and used --use-syn-sdc only for dataset_1 and dataset_2.
If run on the GRID, cd into the analyse folder and run:
singularity run --cleanenv /analyse/Project0255/my_images/fmriprep-1.5.2.simg /analyse/Project0255/dataset_1/ /analyse/Project0255/dataset_1/derivatives participant --participant-label sub-129 --fs-license-file /analyse/Project0255/my_images/license.txt --skip_bids_validation --use-syn-sdc --fs-no-reconall -w /analyse/Project0255/work/compute00
Resize functional files for two participants (sub-117 and sub-125) from dataset_1 (sub-{sub}_ses-01_task-movie_space-MNI152NLin2009cAsym_desc-preproc_bold.nii) to allow for group comparison (run this in MATLAB):
voxsiz = [3 3 3.5]; % new voxel size {mm}
V = spm_select([1 Inf],'image');
V = spm_vol(V);
for i=1:numel(V)
bb = spm_get_bbox(V(i));
VV(1:2) = V(i);
VV(1).mat = spm_matrix([bb(1,:) 0 0 0 voxsiz])*spm_matrix([-1 -1 -1]);
VV(1).dim = ceil(VV(1).mat \ [bb(2,:) 1]' - 0.1)';
VV(1).dim = VV(1).dim(1:3);
spm_reslice(VV,struct('mean',false,'which',1,'interp',0)); % 1 for linear
end
Example MATLAB script (dataset 3):
%========================================================================
% SPM first-level analysis for fmriprep data in BIDS format
%========================================================================
% This script is written by Ruud Hortensius and Michaela Kent
% (University of Glasgow). Based upon a script written by
% Shengdong Chen (ACRLAB) and Stephan Heunis (TU Eindhoven).
%
% Added: loop for runs
% Parameters as specified by Saxelab: https://saxelab.mit.edu/theory-mind-and-pain-matrix-localizer-movie-viewing-experiment
%
% Last updated: January 2020
%========================================================================
clear all
%% Inputdirs
BIDS = spm_BIDS('/Volumes/Project0255/dataset_3/'); % Parse BIDS directory (easier to query info from dataset)
BIDSpreproc=fullfile(BIDS.dir,'derivatives/fmriprep'); % get the preprocessed directory
%sublist = spm_BIDS(BIDS,'subjects') %number of subjects
sublist = transpose(BIDS.participants.participant_id) %get subject list including the 'sub'
subex = []
sublist(subex) = []; %update the subjects
taskid='movie'; %specify the task to be analysed
numScans=175; %The number of volumes per run <---
TR = 2; % Repetition time, in seconds <---
unit='secs'; % onset times in secs (seconds) or scans (TRs)
%% Outputdirs
outputdir=fullfile(BIDS.dir,'derivatives/bids_spm/first_level'); % root outputdir for sublist
spm_mkdir(outputdir,char(sublist), char(taskid)); % create output directory
%% Loop for sublist
spm('Defaults','fMRI'); %Initialise SPM fmri
spm_jobman('initcfg'); %Initialise SPM batch mode
for i=1:length(sublist)
%% Output dirs where you save SPM.mat
subdir=fullfile(outputdir,sublist{i},taskid);
%% Basic parameters
matlabbatch{1}.spm.stats.fmri_spec.dir = {subdir};
matlabbatch{1}.spm.stats.fmri_spec.timing.units = unit; % specified above
matlabbatch{1}.spm.stats.fmri_spec.timing.RT = TR; % specified above
matlabbatch{1}.spm.stats.fmri_spec.timing.fmri_t = 68; %<--- look into this
matlabbatch{1}.spm.stats.fmri_spec.timing.fmri_t0 = 34; %<--- look into this
%% Load input files for the specified task
sub_inputdir=fullfile(BIDSpreproc,sublist{i},'func');
sub_inputdirA=fullfile(BIDSpreproc,sublist{i},'anat');
%------------------------------------------------------------------
func=[sub_inputdir,filesep,sublist{i},'_task-',taskid,'_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz'];
func_nii=[sub_inputdir,filesep,sublist{i}, '_task-',taskid,'_space-MNI152NLin2009cAsym_desc-preproc_bold.nii'];
if ~exist(func_nii,'file'), gunzip(func)
end
run_scans = spm_select('Expand',func_nii);
matlabbatch{1}.spm.stats.fmri_spec.sess(1).scans = cellstr(run_scans);
% Load the condition files
events = spm_load([BIDS.dir,filesep,sublist{i},'/func/', sublist{i},'_task-',taskid,'_events.tsv']) %load TSV condition file
names{1} = 'mental';
t = strcmp(names{1}, events.trial_type)
onsets{1} = transpose(events.onset(t));
durations{1} = transpose(events.duration(t));
names{2} = 'pain';
t = strcmp(names{2}, events.trial_type)
onsets{2} = transpose(events.onset(t));
durations{2} = transpose(events.duration(t));
file_mat = [subdir,filesep,sublist{i},'_task-',taskid,'_conditions.mat'];
save(file_mat, 'names', 'onsets', 'durations')
matlabbatch{1}.spm.stats.fmri_spec.sess(1).cond = struct('name', {}, 'onset', {}, 'duration', {}, 'tmod', {}, 'pmod', {}, 'orth', {});
matlabbatch{1}.spm.stats.fmri_spec.sess(1).multi = {file_mat};
% Confounds file
confounds=spm_load([sub_inputdir,filesep,sublist{i},'_task-',taskid,'_desc-confounds_regressors.tsv']) ;
confounds_matrix=[confounds.framewise_displacement, confounds.a_comp_cor_00,confounds.a_comp_cor_01,confounds.a_comp_cor_02,confounds.a_comp_cor_03, confounds.a_comp_cor_04,confounds.a_comp_cor_05, confounds.trans_x, confounds.trans_y, confounds.trans_z, confounds.rot_x, confounds.rot_y, confounds.rot_z];
confounds_name=[subdir,filesep,sublist{i},'_task-',taskid,'_acomcorr.txt'];
confounds_matrix(isnan(confounds_matrix)) = 0 % nanmean(confounds_matrix); %check this <-----
if ~exist(confounds_name,'file'), dlmwrite(confounds_name,confounds_matrix)
end
matlabbatch{1}.spm.stats.fmri_spec.sess(1).multi_reg = {confounds_name};
matlabbatch{1}.spm.stats.fmri_spec.sess(1).hpf = 128; % High-pass filter (hpf) without using cosine
%% Model (Default)
matlabbatch{1}.spm.stats.fmri_spec.fact = struct('name', {}, 'levels', {});
matlabbatch{1}.spm.stats.fmri_spec.bases.hrf.derivs = [0 0];
matlabbatch{1}.spm.stats.fmri_spec.volt = 1;
matlabbatch{1}.spm.stats.fmri_spec.global = 'Scaling';
mask=[sub_inputdirA,filesep,sublist{i},'_space-MNI152NLin2009cAsym_label-GM_probseg.nii.gz'];
mask_nii=[sub_inputdirA,filesep,sublist{i},'_space-MNI152NLin2009cAsym_label-GM_probseg.nii'];
if ~exist(mask_nii,'file'), gunzip(mask)
end
mask_nii=[mask_nii, ',1']
matlabbatch{1}.spm.stats.fmri_spec.mask = {mask_nii};
matlabbatch{1}.spm.stats.fmri_spec.mthresh = 0.8;
matlabbatch{1}.spm.stats.fmri_spec.cvi = 'none';
%% Model estimation (Default)
matlabbatch{2}.spm.stats.fmri_est.spmmat = {[subdir filesep 'SPM.mat']};
matlabbatch{2}.spm.stats.fmri_est.write_residuals = 0;
matlabbatch{2}.spm.stats.fmri_est.method.Classical = 1;
%% Contrasts
matlabbatch{3}.spm.stats.con.spmmat = {[subdir filesep 'SPM.mat']};
% Set contrasts of interest.
matlabbatch{3}.spm.stats.con.consess{1}.tcon.name = 'mental_pain';
matlabbatch{3}.spm.stats.con.consess{1}.tcon.convec = [1 -1 0 0 0 0 0 0 0 0 0 0 0 0 0];
matlabbatch{3}.spm.stats.con.consess{2}.tcon.name = 'pain_mental';
matlabbatch{3}.spm.stats.con.consess{2}.tcon.convec = [-1 1 0 0 0 0 0 0 0 0 0 0 0 0 0];
matlabbatch{3}.spm.stats.con.delete = 0;
%% Run matlabbatch jobs
spm_jobman('run',matlabbatch);
end
Run the following commands in the terminal.
Dataset_1:
cd "/Volumes/Project0255/code/"
matlab -batch "BIDS_SPM_firstlevel_tom_dataset1"
matlab -batch "BIDS_SPM_firstlevel_tom_dataset1_ppn101"
matlab -batch "BIDS_SPM_firstlevel_tom_dataset1_ppn114"
Dataset_2:
cd "/Volumes/Project0255/code/"
matlab -batch "BIDS_SPM_firstlevel_tom_dataset2_ppn201_202"
Dataset_3:
cd "/Volumes/Project0255/code/"
matlab -batch "BIDS_SPM_firstlevel_tom_dataset3"
Dataset_4:
cd "/Volumes/Project0255/code/"
matlab -batch "BIDS_SPM_firstlevel_tom_dataset4"
Create a group average for the GM_probseg.nii for each dataset in Matlab (change the code per dataset; run this in MATLAB):
clear all
spm('Defaults','fMRI');
spm_jobman('initcfg');
BIDS = spm_BIDS('/Volumes/Project0255/dataset_3'); %change this
BIDSfirst=fullfile(BIDS.dir,'derivatives/fmriprep');
sublist = transpose(BIDS.participants.participant_id)
subex = [] %subjects that don't have an anatomical (14 dataset_1)
sublist(subex) = [];
for i=1:length(sublist)
subdir=fullfile(BIDSfirst,sublist{i}, 'anat')
matlabbatch{1}.spm.util.imcalc.input{i,1} = [subdir, filesep, sublist{i}, '_space-MNI152NLin2009cAsym_label-GM_probseg.nii,1']
end
matlabbatch{1}.spm.util.imcalc.output = 'dataset3_averageGM';
matlabbatch{1}.spm.util.imcalc.outdir = {'/Volumes/Project0255/dataset_3/derivatives/fmriprep'}; %change this
matlabbatch{1}.spm.util.imcalc.expression = 'mean(X)';
matlabbatch{1}.spm.util.imcalc.var = struct('name', {}, 'value', {});
matlabbatch{1}.spm.util.imcalc.options.dmtx = 1;
matlabbatch{1}.spm.util.imcalc.options.mask = 0;
matlabbatch{1}.spm.util.imcalc.options.interp = 1;
matlabbatch{1}.spm.util.imcalc.options.dtype = 4;
spm_jobman('run',matlabbatch);
Example MATLAB script (dataset 3):
%========================================================================
% SPM second-level analysis for fmriprep data in BIDS format
%========================================================================
% This script is written by Ruud Hortensius and Michaela Kent
% (University of Glasgow)
%
% Last updated: January 2020
%========================================================================
clear all
%% Inputdirs
BIDS = spm_BIDS('/Volumes/Project0255/dataset_3'); % Parse BIDS directory (easier to query info from dataset)
BIDSfirst=fullfile(BIDS.dir,'derivatives/bids_spm/first_level'); % get the first-level directory
sublist = transpose(BIDS.participants.participant_id) %get subject list including the 'sub'
subex = [] %subjects that don't have a second-session
sublist(subex) = []; %update the subjects
%nsession = spm_BIDS(BIDS,'sessions') %how many sessions? careful, sometimes collapsing across sessions not wanted
%sessionid = 'ses-01' %get session id
taskid='movieHC'; %specify the task to be analysed
contrast='con_0001'; %specify the contrast to be analysed
contrast_name='mental_hc'; %specify the name of the contrast
smoothing = 1; %smoothing of first-level contrasts (1=yes, 0=no)
s_kernel = [5 5 5];
%% Outputdirs
outputdir=fullfile(BIDS.dir,'derivatives/bids_spm/second_level', char(contrast_name)); % root outputdir for sublist
spm_mkdir(outputdir); % create output directory
spm('Defaults','fMRI'); %Initialise SPM fmri
spm_jobman('initcfg'); %Initialise SPM batch mode
%% Smoothing of first-level contrasts
if smoothing == 1
for i=1:length(sublist)
subdir=fullfile(BIDSfirst,sublist{i}, taskid);
matlabbatch{1}.spm.spatial.smooth.data{i,1} = [subdir, filesep, contrast, '.nii,1'];
matlabbatch{1}.spm.spatial.smooth.fwhm = s_kernel;
matlabbatch{1}.spm.spatial.smooth.dtype = 0;
matlabbatch{1}.spm.spatial.smooth.im = 0;
matlabbatch{1}.spm.spatial.smooth.prefix = 's';
end
spm_jobman('run',matlabbatch);
clear matlabbatch
end
%% Load the contrasts
matlabbatch{1}.spm.stats.factorial_design.dir = {outputdir};
for i=1:length(sublist)
subdir=fullfile(BIDSfirst,sublist{i}, taskid);
if smoothing == 1
matlabbatch{1,1}.spm.stats.factorial_design.des.t1.scans{i,1} = [subdir, filesep, 's', contrast, '.nii,1']
else
matlabbatch{1,1}.spm.stats.factorial_design.des.t1.scans{i,1} = [subdir, filesep, contrast, '.nii,1']
end
end
matlabbatch{1}.spm.stats.factorial_design.cov = struct('c', {}, 'cname', {}, 'iCFI', {}, 'iCC', {});
matlabbatch{1}.spm.stats.factorial_design.multi_cov = struct('files', {}, 'iCFI', {}, 'iCC', {});
matlabbatch{1}.spm.stats.factorial_design.masking.tm.tm_none = 1;
matlabbatch{1}.spm.stats.factorial_design.masking.im = 1;
matlabbatch{1}.spm.stats.factorial_design.masking.em = {''};
matlabbatch{1}.spm.stats.factorial_design.globalc.g_omit = 1;
matlabbatch{1}.spm.stats.factorial_design.globalm.gmsca.gmsca_no = 1;
matlabbatch{1}.spm.stats.factorial_design.globalm.glonorm = 1;
%% Model estimation
matlabbatch{2}.spm.stats.fmri_est.spmmat = {[outputdir filesep 'SPM.mat']};
matlabbatch{2}.spm.stats.fmri_est.write_residuals = 0;
matlabbatch{2}.spm.stats.fmri_est.method.Classical = 1;
%% Contrast
%--------------------------------------------------------------------------
matlabbatch{3}.spm.stats.con.spmmat = {[outputdir filesep 'SPM.mat']};
matlabbatch{3}.spm.stats.con.consess{1}.tcon.name = contrast_name;
matlabbatch{3}.spm.stats.con.consess{1}.tcon.weights = 1;
matlabbatch{3}.spm.stats.con.consess{1}.tcon.sessrep = 'none';
matlabbatch{3}.spm.stats.con.delete = 0;
%% Results
%--------------------------------------------------------------------------
matlabbatch{4}.spm.stats.results.spmmat = {[outputdir filesep 'SPM.mat']};
matlabbatch{4}.spm.stats.results.conspec.titlestr = '';
matlabbatch{4}.spm.stats.results.conspec.contrasts = 1;
matlabbatch{4}.spm.stats.results.conspec.threshdesc = 'none';
matlabbatch{4}.spm.stats.results.conspec.thresh = 0.001;
matlabbatch{4}.spm.stats.results.conspec.extent = 5;
matlabbatch{4}.spm.stats.results.conspec.conjunction = 1;
matlabbatch{4}.spm.stats.results.conspec.mask.image.name = {'/Volumes/Project0255/dataset_3/derivatives/fmriprep/dataset3_averageGM.nii,1'};
matlabbatch{4}.spm.stats.results.conspec.mask.image.mtype = 0;
matlabbatch{4}.spm.stats.results.units = 1;
matlabbatch{4}.spm.stats.results.export{1}.pdf = true;
matlabbatch{4}.spm.stats.results.export{2}.jpg = true;
matlabbatch{4}.spm.stats.results.export{3}.csv = true;
matlabbatch{4}.spm.stats.results.export{4}.tspm.basename = contrast_name;
%% Run matlabbatch jobs
spm_jobman('run',matlabbatch);
Run it separately for the datasets:
cd "/Volumes/Project0255/code/"
matlab -batch "BIDS_SPM_secondlevel_tom_dataset1"
matlab -batch "BIDS_SPM_secondlevel_tom_dataset2"
matlab -batch "BIDS_SPM_secondlevel_tom_dataset3"
matlab -batch "BIDS_SPM_secondlevel_tom_dataset4"
Run the following (ROI_extract.m) script in matlab (change the code per dataset and roi and contrast - run this in MATLAB):
%========================================================================
% fROI analysis for fmriprep data in BIDS format
%========================================================================
% This script is written by Michaela Kent and Ruud Hortensius
% (University of Glasgow)
%
% Last updated: January 2020
%========================================================================
clear all
%add marsbar to path
marsbar('on')
%% Inputdirs
BIDS = spm_BIDS('/Volumes/Project0255/dataset_4'); % parse BIDS directory (easier to query info from dataset)
BIDSsecond=fullfile(BIDS.dir,'derivatives/bids_spm/second_level'); % get the second-level directory
contrastid = 'mental' %can be either mental (vs. pain) or pain (vs. mental)
networkid = 'tom' %can be either tom (theory-of-mind) or pain (pain matrix)
%% Outputdirs
outputdir=fullfile(BIDS.dir,'derivatives/roi', networkid); % root outputdir for sublist
spm_mkdir(outputdir); % create output directory
%% Load design matrix
spm_name = spm_load(fullfile(BIDSsecond, filesep, contrastid , 'SPM.mat'))
D = mardo(spm_name);
%% Load rois
parcels = dir(fullfile(BIDS.dir,'derivatives/parcels/', networkid))
parcels = struct2cell(parcels(arrayfun(@(x) ~strcmp(x.name(1),'.'),parcels)))
parcels(2:6,:) = []
for i=1:length(parcels)
roi = fullfile(BIDS.dir,'derivatives/parcels/', networkid, parcels{i})
R = maroi(roi);
% Fetch data into marsbar data object
mY = get_marsy(R, D, 'mean');
roi_data = summary_data(mY); % get summary time course(s)
roi_name = [outputdir,filesep,parcels{i},'.tsv'];
dlmwrite(roi_name,roi_data);
end
Add sub-201 and sub-202 to get the fROI data (different parameters, not included in the whole-brain analysis):
cd "/Volumes/Project0255/code/"
matlab -batch "BIDS_SPM_secondlevel_tom_dataset2_201_202"
matlab -batch "ROI_extract_201_202"
Dataset 2: sub-206 to 212, 219, 221-222, 224-225, 228, 231, 233-234 completed a version with the scale ranging from 1-10 instead of 0-10. Analyses should be run with and without these participants:
sub_ex = c(206:212, 219, 221:222, 224:225, 228, 231, 233:234)
Get the IDAQ data for all the participants:
library(tidyverse)
── Attaching packages ──────────────── tidyverse 1.3.0 ──
✓ ggplot2 3.3.1     ✓ purrr   0.3.4
✓ tibble  3.0.1     ✓ dplyr   1.0.0
✓ tidyr   1.1.0     ✓ stringr 1.4.0
✓ readr   1.3.1     ✓ forcats 0.5.0
── Conflicts ──────────────── tidyverse_conflicts() ──
x dplyr::filter() masks stats::filter()
x dplyr::lag()    masks stats::lag()
#load data (/Volumes/Project0255/dataset_1/sourcedata/)
DF.d1 <- read_csv(file = paste("IDAQ_dataset1.csv", sep ="")) %>%
gather("sub", "value", 4:32)
Parsed with column specification:
cols(
.default = col_double(),
scale = col_character(),
subscale = col_character()
)
See spec(...) for full column specifications.
DF.d2 <- read_csv(file = paste("IDAQ_dataset2.csv", sep ="")) %>%
gather("sub", "value", 4:38)
Parsed with column specification:
cols(
.default = col_double(),
scale = col_character(),
subscale = col_character()
)
See spec(...) for full column specifications.
DF.d3 <- read_csv("IDAQ_dataset3.csv") %>%
gather("sub", "value", 4:25)
Parsed with column specification:
cols(
.default = col_double(),
scale = col_character(),
subscale = col_character()
)
See spec(...) for full column specifications.
DF.d4 <- read_csv("IDAQ_dataset4.csv") %>%
gather("sub", "value", 4:25)
Parsed with column specification:
cols(
.default = col_double(),
scale = col_character(),
subscale = col_character()
)
See spec(...) for full column specifications.
DF.idaq <- bind_rows(DF.d1, DF.d2, DF.d3, DF.d4, .id = "dataset") %>%
mutate(sub=gsub('sub-','',sub))%>%
transform(sub=as.integer(sub)) %>%
mutate(scale = as.factor(ifelse(scale == "IDAQ-NA", "IDAQNA", "IDAQ")))
rm(DF.d1, DF.d2, DF.d3, DF.d4)
Check the reliability of the IDAQ scale:
library("psych")
Attaching package: ‘psych’
The following objects are masked from ‘package:ggplot2’:
%+%, alpha
DF.idaq %>%
filter(scale == "IDAQ") %>%
#filter(!sub %in% sub_ex) %>%
select(-scale, -subscale) %>%
spread(itemnr, value) %>%
select(-sub, -dataset) %>%
alpha(na.rm = TRUE)
Reliability analysis
Call: alpha(x = ., na.rm = TRUE)
lower alpha upper 95% confidence boundaries
0.75 0.8 0.86
Reliability if an item is dropped:
Item statistics
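For reference, the alpha that psych reports above is Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score). A self-contained sketch with invented toy scores (not the IDAQ data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# toy data: three items that largely agree across five respondents
scores = [[2, 3, 2], [4, 4, 5], [1, 2, 1], [5, 5, 4], [3, 3, 3]]
```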
Check the reliability of the IDAQ-NA scale:
DF.idaq %>%
filter(scale == "IDAQNA") %>%
filter(!sub %in% sub_ex) %>%
select(-scale, -subscale) %>%
spread(itemnr, value) %>%
select(-sub, -dataset) %>%
alpha(na.rm = TRUE)
Reliability analysis
Call: alpha(x = ., na.rm = TRUE)
lower alpha upper 95% confidence boundaries
0.51 0.62 0.73
Reliability if an item is dropped:
Item statistics
Test if there are differences in IDAQ and IDAQ-NA scores between datasets.
Before fitting any model we establish which link function we need to use. This code is taken from Kevin Stadler’s GitHub; in short, it uses the ordinal package to (quickly) find the link function that best fits the data:
library(ordinal)
Attaching package: ‘ordinal’
The following object is masked from ‘package:dplyr’:
slice
cumulativemodelfit <- function(formula, data, links=c("logit", "probit", "cloglog", "cauchit"),
                               thresholds=c("flexible", "equidistant"), verbose=TRUE) {
  names(links) <- links
  names(thresholds) <- thresholds
  llks <- outer(links, thresholds,
                Vectorize(function(link, threshold)
                  # catch error for responses with 2 levels
                  tryCatch(ordinal::clm(formula, data=data, link=link, threshold=threshold)$logLik,
                           error = function(e) NA)))
  print(llks)
  if (verbose) {
    bestfit <- which.max(llks)
    cat("\nThe best link function is ", links[bestfit %% length(links)], " with a ",
        thresholds[1 + bestfit %/% length(thresholds)], " threshold (logLik ", llks[bestfit],
        ")\n", sep="")
  }
  invisible(llks)
}
Run the analysis for the anthropomorphism subscale:
DF.test = DF.idaq %>%
filter(scale == "IDAQ") %>%
mutate_all(as.factor) %>%
mutate(value = factor(value, levels=c(0:10), ordered=TRUE)) #add contrast coding?
Get the link function:
cumulativemodelfit(value ~ 1, data=DF.test)
flexible equidistant
logit -3559.563 -3638.793
probit -3559.563 -3629.921
cloglog -3559.563 -3663.966
cauchit -3559.563 -3629.921
The best link function is logit with a flexible threshold (logLik -3559.563)
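In a cumulative ("proportional odds") model, K-1 ordered thresholds on a latent logit scale carve the response into K categories: P(Y <= k) = logistic(tau_k - eta). A sketch of how thresholds plus a linear predictor translate into category probabilities (the threshold values here are invented, not fitted):

```python
import math

def cumulative_probs(thresholds, eta):
    """Category probabilities under a cumulative logit model:
    P(Y <= k) = logistic(thresholds[k] - eta)."""
    logistic = lambda x: 1.0 / (1.0 + math.exp(-x))
    cum = [logistic(t - eta) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# three thresholds -> four ordered response categories
probs = cumulative_probs([-1.0, 0.5, 2.0], eta=0.0)
```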
library(brms)
Loading 'brms' package (version 2.13.0). Useful instructions
can be found by typing help('brms'). A more detailed introduction
to the package is available through vignette('brms_overview').
Attaching package: ‘brms’
The following objects are masked from ‘package:ordinal’:
ranef, VarCorr
The following object is masked from ‘package:psych’:
cs
The following object is masked from ‘package:stats’:
ar
Get summary and marginal effects:
ord.2 <- brm(
value ~ 1 + dataset +
(1|sub) + (1|itemnr),
data = DF.test,
family = cumulative("logit"),
file = 'ord.2~random.RDS'
)
Get summary and marginal effects:
ord.3 <- brm(
value ~ 1 + cs(dataset) +
(cs(1)|sub) + (cs(1)|itemnr),
data = DF.test,
family = acat("logit"),
file = 'ord.3~category.RDS'
)
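The acat family used for ord.3 instead models the adjacent-category logits, log(P(Y = k+1) / P(Y = k)) = eta - tau_k, so with cs() each predictor can act differently at each step of the scale. A sketch of recovering category probabilities from adjacent-category logits (one common sign convention; brms' internal parameterisation may differ, and the numbers are illustrative):

```python
import math

def acat_probs(thresholds, eta):
    """Category probabilities under an adjacent-category logit model:
    log(P(Y = k+1) / P(Y = k)) = eta - thresholds[k]."""
    unnorm = [1.0]
    for t in thresholds:
        # each category's mass is the previous one's, scaled by the step odds
        unnorm.append(unnorm[-1] * math.exp(eta - t))
    z = sum(unnorm)
    return [u / z for u in unnorm]

probs = acat_probs([-0.5, 0.0, 0.8], eta=0.0)
```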
Get summary and marginal effects:
summary(ord.3) #prob = .99 for 99% credible intervals
Family: acat
Links: mu = logit; disc = identity
Formula: value ~ 1 + cs(dataset) + (cs(1) | sub) + (cs(1) | itemnr)
Data: DF.test (Number of observations: 1616)
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
total post-warmup samples = 4000
Group-Level Effects:
~itemnr (Number of levels: 15)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept[1]) 1.53 0.40 0.91 2.50 1.00 1155 1765
sd(Intercept[2]) 1.45 0.37 0.89 2.35 1.00 1681 2421
sd(Intercept[3]) 0.67 0.22 0.31 1.17 1.00 1861 2731
sd(Intercept[4]) 0.38 0.24 0.03 0.90 1.00 1272 1754
sd(Intercept[5]) 0.78 0.26 0.33 1.35 1.00 1893 1828
sd(Intercept[6]) 0.74 0.26 0.31 1.34 1.00 1904 1671
sd(Intercept[7]) 0.28 0.21 0.01 0.79 1.00 1550 1807
sd(Intercept[8]) 0.96 0.32 0.49 1.76 1.00 1993 2602
sd(Intercept[9]) 0.46 0.32 0.03 1.23 1.00 1319 1923
sd(Intercept[10]) 0.61 0.48 0.03 1.87 1.00 1258 1742
cor(Intercept[1],Intercept[2]) 0.51 0.20 0.07 0.84 1.00 866 1357
cor(Intercept[1],Intercept[3]) 0.45 0.22 -0.03 0.82 1.00 2384 2663
cor(Intercept[2],Intercept[3]) 0.36 0.23 -0.14 0.76 1.00 2784 2851
cor(Intercept[1],Intercept[4]) 0.22 0.28 -0.36 0.71 1.00 3197 3066
cor(Intercept[2],Intercept[4]) 0.15 0.29 -0.45 0.65 1.00 3717 2899
cor(Intercept[3],Intercept[4]) 0.05 0.28 -0.50 0.59 1.00 3312 2588
cor(Intercept[1],Intercept[5]) 0.28 0.24 -0.23 0.70 1.00 2434 3075
cor(Intercept[2],Intercept[5]) 0.27 0.24 -0.23 0.69 1.00 2491 2833
cor(Intercept[3],Intercept[5]) 0.22 0.25 -0.30 0.68 1.00 2156 2791
cor(Intercept[4],Intercept[5]) -0.04 0.29 -0.58 0.53 1.00 2030 3063
cor(Intercept[1],Intercept[6]) 0.14 0.25 -0.36 0.60 1.00 2136 2688
cor(Intercept[2],Intercept[6]) 0.19 0.25 -0.32 0.65 1.00 2660 2475
cor(Intercept[3],Intercept[6]) 0.22 0.26 -0.29 0.68 1.00 2086 2497
cor(Intercept[4],Intercept[6]) 0.09 0.29 -0.46 0.63 1.00 2129 2726
cor(Intercept[5],Intercept[6]) 0.06 0.27 -0.45 0.57 1.00 2920 3194
cor(Intercept[1],Intercept[7]) -0.03 0.29 -0.58 0.53 1.00 3606 2999
cor(Intercept[2],Intercept[7]) -0.00 0.29 -0.56 0.55 1.00 3884 3058
cor(Intercept[3],Intercept[7]) 0.06 0.29 -0.52 0.60 1.00 3212 3101
cor(Intercept[4],Intercept[7]) 0.03 0.30 -0.54 0.60 1.00 3279 2615
cor(Intercept[5],Intercept[7]) -0.01 0.29 -0.57 0.54 1.00 3679 3098
cor(Intercept[6],Intercept[7]) -0.01 0.30 -0.57 0.58 1.00 3542 3406
cor(Intercept[1],Intercept[8]) 0.33 0.23 -0.15 0.73 1.00 1872 2551
cor(Intercept[2],Intercept[8]) 0.33 0.23 -0.17 0.73 1.00 2696 3075
cor(Intercept[3],Intercept[8]) 0.38 0.23 -0.11 0.77 1.00 2340 2787
cor(Intercept[4],Intercept[8]) 0.11 0.28 -0.44 0.62 1.00 2159 3123
cor(Intercept[5],Intercept[8]) 0.39 0.23 -0.11 0.78 1.00 2418 3117
cor(Intercept[6],Intercept[8]) 0.19 0.25 -0.34 0.64 1.00 2855 3150
cor(Intercept[7],Intercept[8]) 0.03 0.29 -0.52 0.57 1.00 2854 3301
cor(Intercept[1],Intercept[9]) 0.05 0.28 -0.49 0.59 1.00 3374 2894
cor(Intercept[2],Intercept[9]) -0.02 0.29 -0.56 0.52 1.00 3732 2813
cor(Intercept[3],Intercept[9]) -0.02 0.28 -0.55 0.53 1.00 3578 3082
cor(Intercept[4],Intercept[9]) 0.10 0.30 -0.50 0.64 1.00 2700 2800
cor(Intercept[5],Intercept[9]) -0.03 0.29 -0.58 0.53 1.00 3211 2798
cor(Intercept[6],Intercept[9]) 0.11 0.28 -0.46 0.61 1.00 3265 3624
cor(Intercept[7],Intercept[9]) 0.03 0.30 -0.55 0.60 1.00 2714 3240
cor(Intercept[8],Intercept[9]) -0.06 0.29 -0.59 0.51 1.00 3220 3350
cor(Intercept[1],Intercept[10]) 0.12 0.27 -0.44 0.63 1.00 3782 2939
cor(Intercept[2],Intercept[10]) 0.14 0.28 -0.43 0.65 1.00 4177 3371
cor(Intercept[3],Intercept[10]) 0.09 0.28 -0.46 0.61 1.00 3063 2990
cor(Intercept[4],Intercept[10]) 0.08 0.28 -0.49 0.60 1.00 2526 3064
cor(Intercept[5],Intercept[10]) 0.08 0.28 -0.48 0.60 1.00 3649 3117
cor(Intercept[6],Intercept[10]) 0.12 0.28 -0.44 0.62 1.00 3097 3274
cor(Intercept[7],Intercept[10]) 0.05 0.30 -0.54 0.61 1.00 2947 3559
cor(Intercept[8],Intercept[10]) 0.09 0.28 -0.47 0.61 1.00 3249 3335
cor(Intercept[9],Intercept[10]) 0.05 0.29 -0.53 0.60 1.00 2579 3241
~sub (Number of levels: 108)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept[1]) 2.88 0.34 2.28 3.60 1.01 1201 2068
sd(Intercept[2]) 1.09 0.20 0.73 1.49 1.00 1303 2389
sd(Intercept[3]) 0.68 0.22 0.23 1.12 1.01 868 776
sd(Intercept[4]) 0.94 0.24 0.47 1.42 1.00 1061 1261
sd(Intercept[5]) 1.30 0.26 0.81 1.85 1.00 1058 1635
sd(Intercept[6]) 1.38 0.24 0.95 1.87 1.01 1305 2015
sd(Intercept[7]) 0.32 0.21 0.01 0.79 1.01 712 1274
sd(Intercept[8]) 0.85 0.22 0.42 1.30 1.00 968 1687
sd(Intercept[9]) 0.57 0.32 0.05 1.24 1.00 862 879
sd(Intercept[10]) 1.85 0.35 1.22 2.59 1.00 1843 2640
cor(Intercept[1],Intercept[2]) -0.14 0.18 -0.47 0.23 1.00 1401 1930
cor(Intercept[1],Intercept[3]) -0.19 0.20 -0.55 0.24 1.00 2775 2553
cor(Intercept[2],Intercept[3]) 0.28 0.23 -0.19 0.71 1.00 1415 2032
cor(Intercept[1],Intercept[4]) 0.16 0.20 -0.24 0.55 1.00 1595 2223
cor(Intercept[2],Intercept[4]) 0.30 0.21 -0.11 0.70 1.00 1440 2021
cor(Intercept[3],Intercept[4]) -0.05 0.26 -0.52 0.49 1.00 1248 1752
cor(Intercept[1],Intercept[5]) -0.16 0.16 -0.47 0.17 1.00 1374 2214
cor(Intercept[2],Intercept[5]) 0.25 0.20 -0.16 0.62 1.00 829 1729
cor(Intercept[3],Intercept[5]) 0.20 0.23 -0.29 0.63 1.00 702 1516
cor(Intercept[4],Intercept[5]) -0.08 0.23 -0.50 0.41 1.01 791 1382
cor(Intercept[1],Intercept[6]) -0.11 0.15 -0.40 0.20 1.00 1745 2303
cor(Intercept[2],Intercept[6]) -0.05 0.20 -0.43 0.34 1.01 819 1698
cor(Intercept[3],Intercept[6]) 0.02 0.23 -0.43 0.46 1.01 571 1353
cor(Intercept[4],Intercept[6]) -0.23 0.21 -0.61 0.21 1.01 606 1248
cor(Intercept[5],Intercept[6]) -0.59 0.15 -0.82 -0.27 1.00 1040 2118
cor(Intercept[1],Intercept[7]) -0.20 0.28 -0.68 0.40 1.00 3215 2971
cor(Intercept[2],Intercept[7]) 0.02 0.28 -0.54 0.56 1.00 3606 3022
cor(Intercept[3],Intercept[7]) 0.14 0.29 -0.45 0.67 1.00 2671 2602
cor(Intercept[4],Intercept[7]) -0.06 0.29 -0.59 0.52 1.00 3334 2996
cor(Intercept[5],Intercept[7]) 0.06 0.28 -0.50 0.59 1.00 3439 2921
cor(Intercept[6],Intercept[7]) 0.02 0.29 -0.54 0.57 1.00 3511 2715
cor(Intercept[1],Intercept[8]) -0.07 0.18 -0.41 0.28 1.00 2488 2570
cor(Intercept[2],Intercept[8]) 0.31 0.20 -0.10 0.67 1.00 1162 2060
cor(Intercept[3],Intercept[8]) 0.35 0.23 -0.16 0.72 1.00 819 1463
cor(Intercept[4],Intercept[8]) -0.04 0.23 -0.48 0.41 1.00 1159 2733
cor(Intercept[5],Intercept[8]) 0.09 0.22 -0.35 0.51 1.00 1857 2560
cor(Intercept[6],Intercept[8]) 0.29 0.21 -0.15 0.68 1.00 2214 3047
cor(Intercept[7],Intercept[8]) 0.07 0.29 -0.49 0.59 1.01 1288 2341
cor(Intercept[1],Intercept[9]) 0.10 0.25 -0.41 0.55 1.00 3369 2738
cor(Intercept[2],Intercept[9]) 0.02 0.26 -0.48 0.54 1.00 2645 2805
cor(Intercept[3],Intercept[9]) 0.16 0.29 -0.41 0.67 1.00 1209 2411
cor(Intercept[4],Intercept[9]) -0.01 0.28 -0.54 0.53 1.00 2334 2852
cor(Intercept[5],Intercept[9]) 0.17 0.28 -0.41 0.66 1.00 2245 2352
cor(Intercept[6],Intercept[9]) -0.10 0.27 -0.60 0.45 1.00 2950 2782
cor(Intercept[7],Intercept[9]) 0.04 0.30 -0.54 0.60 1.00 1735 2802
cor(Intercept[8],Intercept[9]) 0.17 0.28 -0.40 0.67 1.00 2414 2281
cor(Intercept[1],Intercept[10]) -0.43 0.14 -0.69 -0.15 1.00 1790 2431
cor(Intercept[2],Intercept[10]) -0.20 0.19 -0.56 0.19 1.00 893 1953
cor(Intercept[3],Intercept[10]) 0.15 0.23 -0.32 0.57 1.00 744 1878
cor(Intercept[4],Intercept[10]) -0.22 0.22 -0.64 0.22 1.01 889 1340
cor(Intercept[5],Intercept[10]) 0.40 0.18 0.00 0.72 1.00 1373 2426
cor(Intercept[6],Intercept[10]) -0.32 0.18 -0.64 0.04 1.00 1843 2542
cor(Intercept[7],Intercept[10]) 0.09 0.28 -0.48 0.60 1.00 1327 2446
cor(Intercept[8],Intercept[10]) -0.16 0.22 -0.57 0.28 1.00 1693 2621
cor(Intercept[9],Intercept[10]) 0.02 0.27 -0.50 0.53 1.00 1748 2945
Population-Level Effects:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept[1] 0.65 0.78 -0.89 2.20 1.01 562 1036
Intercept[2] -0.33 0.54 -1.43 0.72 1.00 1062 1907
Intercept[3] 0.25 0.40 -0.56 1.03 1.00 972 1874
Intercept[4] 0.50 0.43 -0.30 1.38 1.00 1218 1932
Intercept[5] -0.47 0.50 -1.47 0.49 1.00 1185 1961
Intercept[6] 0.01 0.48 -0.93 0.96 1.00 1255 2374
Intercept[7] 0.00 0.31 -0.60 0.61 1.00 1692 2559
Intercept[8] 0.93 0.50 -0.01 1.96 1.00 1239 2046
Intercept[9] 0.91 0.47 0.03 1.87 1.00 1114 1800
Intercept[10] -0.06 0.73 -1.38 1.56 1.00 1226 1671
dataset2[1] 4.14 0.88 2.56 5.94 1.00 644 1534
dataset2[2] -0.46 0.43 -1.32 0.38 1.00 1263 1716
dataset2[3] 0.18 0.41 -0.63 0.98 1.00 1546 2427
dataset2[4] -0.44 0.49 -1.41 0.56 1.00 1493 2234
dataset2[5] -0.51 0.57 -1.64 0.58 1.00 1225 2184
dataset2[6] 0.19 0.56 -0.87 1.31 1.00 1336 1837
dataset2[7] 0.22 0.38 -0.51 0.99 1.00 1999 3036
dataset2[8] 0.20 0.45 -0.67 1.10 1.00 1565 2051
dataset2[9] -0.03 0.49 -0.98 0.92 1.00 1320 2455
dataset2[10] -0.34 0.74 -1.75 1.13 1.00 1177 1779
dataset3[1] 0.08 0.93 -1.76 1.96 1.00 636 1223
dataset3[2] 0.21 0.51 -0.81 1.23 1.00 1488 2479
dataset3[3] 0.21 0.43 -0.64 1.07 1.00 1389 2311
dataset3[4] -0.49 0.55 -1.60 0.57 1.00 1218 1927
dataset3[5] -0.87 0.64 -2.13 0.38 1.00 1344 2390
dataset3[6] 0.08 0.64 -1.15 1.37 1.00 1562 1953
dataset3[7] 0.25 0.45 -0.64 1.14 1.00 2077 2878
dataset3[8] -0.22 0.51 -1.26 0.76 1.00 1950 2266
dataset3[9] -0.09 0.57 -1.24 0.98 1.00 1605 2546
dataset3[10] -1.26 0.91 -3.05 0.56 1.00 1571 2289
dataset4[1] -0.12 0.89 -1.88 1.59 1.01 605 838
dataset4[2] 0.47 0.51 -0.52 1.45 1.00 1417 2478
dataset4[3] 0.13 0.46 -0.77 1.01 1.00 1528 2632
dataset4[4] -0.21 0.56 -1.31 0.89 1.00 1356 2396
dataset4[5] -0.84 0.64 -2.14 0.38 1.00 1366 2369
dataset4[6] 0.43 0.63 -0.78 1.70 1.00 1492 1888
dataset4[7] 0.40 0.43 -0.45 1.26 1.00 1785 2788
dataset4[8] -0.39 0.49 -1.36 0.53 1.00 1743 2074
dataset4[9] 0.12 0.57 -0.99 1.22 1.00 1742 2377
dataset4[10] -0.18 0.84 -1.83 1.45 1.01 1097 1429
Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
ord.4 <- brm(
value ~ 1 + dataset +
(1|sub) + (1|itemnr),
data = DF.test,
family = acat("logit"),
file = 'ord.4~category2.RDS'
)
Get summary and marginal effects:
summary(ord.4) #prob = .99 for 99% credible intervals
Family: acat
Links: mu = logit; disc = identity
Formula: value ~ 1 + dataset + (1 | sub) + (1 | itemnr)
Data: DF.test (Number of observations: 1616)
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
total post-warmup samples = 4000
Group-Level Effects:
~itemnr (Number of levels: 15)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.55 0.12 0.37 0.85 1.00 591 869
~sub (Number of levels: 108)
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept) 0.27 0.03 0.22 0.32 1.00 1223 2264
Population-Level Effects:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept[1] 0.24 0.18 -0.11 0.60 1.01 243 501
Intercept[2] 0.21 0.19 -0.18 0.58 1.01 276 570
Intercept[3] 0.06 0.20 -0.34 0.48 1.01 297 661
Intercept[4] 0.54 0.21 0.13 0.96 1.01 335 746
Intercept[5] -0.18 0.22 -0.62 0.25 1.01 335 808
Intercept[6] 0.47 0.22 0.06 0.90 1.01 379 705
Intercept[7] 0.20 0.22 -0.23 0.62 1.01 322 635
Intercept[8] 0.49 0.21 0.08 0.90 1.01 349 609
Intercept[9] 1.36 0.23 0.91 1.83 1.01 393 1067
Intercept[10] 0.23 0.23 -0.25 0.68 1.00 367 995
dataset2 0.08 0.07 -0.07 0.22 1.00 759 1419
dataset3 -0.13 0.08 -0.29 0.04 1.01 797 1199
dataset4 -0.03 0.08 -0.19 0.13 1.01 755 1456
Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
conditional_effects(ord.4, "dataset", categorical = TRUE)
ord.5 <- brm(
formula = bf(value ~ 1 + dataset) +
lf(disc ~ 0 + dataset, cmc = FALSE),
data = DF.test,
family = cumulative("logit"),
file = 'ord.5~control.RDS'
)
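ord.5 lets the discrimination parameter disc vary by dataset: in brms the cumulative likelihood becomes P(Y <= k) = logistic(disc * (tau_k - eta)), with disc modelled on the log scale (hence the "disc = log" link in the summary below). A larger disc concentrates responses around the modal category. A sketch with arbitrary numbers:

```python
import math

def cumulative_probs_disc(thresholds, eta, log_disc=0.0):
    """Cumulative logit with a discrimination term:
    P(Y <= k) = logistic(disc * (thresholds[k] - eta)), disc = exp(log_disc)."""
    disc = math.exp(log_disc)
    logistic = lambda x: 1.0 / (1.0 + math.exp(-x))
    cum = [logistic(disc * (t - eta)) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# higher disc -> responses pile up in the middle category here
flat = cumulative_probs_disc([-1.0, 1.0], eta=0.0, log_disc=0.0)
sharp = cumulative_probs_disc([-1.0, 1.0], eta=0.0, log_disc=1.0)
```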
Get summary and marginal effects:
summary(ord.5) #prob = .99 for 99% credible intervals
Family: cumulative
Links: mu = logit; disc = log
Formula: value ~ 1 + dataset
disc ~ 0 + dataset
Data: DF.test (Number of observations: 1616)
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
total post-warmup samples = 4000
Population-Level Effects:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept[1] -0.73 0.10 -0.93 -0.53 1.00 1834 2599
Intercept[2] -0.15 0.09 -0.33 0.03 1.00 2488 2795
Intercept[3] 0.17 0.09 -0.00 0.34 1.00 2667 3048
Intercept[4] 0.44 0.09 0.26 0.61 1.00 2701 3031
Intercept[5] 0.61 0.09 0.43 0.79 1.00 2626 2935
Intercept[6] 0.87 0.10 0.68 1.06 1.00 2441 2828
Intercept[7] 1.11 0.10 0.91 1.31 1.00 2217 2914
Intercept[8] 1.47 0.11 1.25 1.70 1.00 1978 2784
Intercept[9] 2.02 0.13 1.77 2.29 1.00 1627 2340
Intercept[10] 2.45 0.16 2.15 2.76 1.00 1586 2331
dataset2 0.32 0.10 0.13 0.52 1.00 2594 2944
dataset3 -0.11 0.12 -0.35 0.13 1.00 2696 3110
dataset4 -0.03 0.13 -0.28 0.21 1.00 2906 2956
disc_dataset2 0.38 0.07 0.25 0.51 1.00 1372 2188
disc_dataset3 0.19 0.08 0.03 0.35 1.00 1588 2452
disc_dataset4 0.06 0.08 -0.11 0.22 1.00 1826 2651
Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
conditional_effects(ord.5, "dataset", categorical = TRUE)
modelC1
Output of model 'ord.1':
Computed from 4000 by 1616 log-likelihood matrix
------
Monte Carlo SE of elpd_loo is 0.0.
All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.
Output of model 'ord.2':
Computed from 4000 by 1616 log-likelihood matrix
------
Monte Carlo SE of elpd_loo is 0.1.
All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.
Output of model 'ord.3':
Computed from 4000 by 1616 log-likelihood matrix
------
Monte Carlo SE of elpd_loo is 0.2.
Pareto k diagnostic values:
Count Pct. Min. n_eff
(-Inf, 0.5] (good) 1614 99.9% 728
(0.5, 0.7] (ok) 2 0.1% 1193
(0.7, 1] (bad) 0 0.0% <NA>
(1, Inf) (very bad) 0 0.0% <NA>
All Pareto k estimates are ok (k < 0.7).
See help('pareto-k-diagnostic') for details.
Output of model 'ord.4':
Computed from 4000 by 1616 log-likelihood matrix
------
Monte Carlo SE of elpd_loo is 0.2.
Pareto k diagnostic values:
Count Pct. Min. n_eff
(-Inf, 0.5] (good) 1615 99.9% 405
(0.5, 0.7] (ok) 1 0.1% 2379
(0.7, 1] (bad) 0 0.0% <NA>
(1, Inf) (very bad) 0 0.0% <NA>
All Pareto k estimates are ok (k < 0.7).
See help('pareto-k-diagnostic') for details.
Output of model 'ord.5':
Computed from 4000 by 1616 log-likelihood matrix
------
Monte Carlo SE of elpd_loo is 0.1.
All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.
Model comparisons:
elpd_diff se_diff
ord.2 0.0 0.0
ord.3 -1.5 16.5
ord.4 -46.0 10.6
ord.5 -631.9 28.5
ord.1 -648.4 27.2
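The elpd_diff and se_diff columns above come from the pointwise LOO estimates: the difference is summed over observations, and its standard error is sqrt(n * var(pointwise differences)), as in loo::loo_compare. A sketch with made-up pointwise elpd vectors:

```python
import math

def elpd_compare(elpd_a, elpd_b):
    """elpd difference (b - a) and its standard error, computed from
    the pointwise elpd values of two models."""
    n = len(elpd_a)
    diff = [b - a for a, b in zip(elpd_a, elpd_b)]
    elpd_diff = sum(diff)
    mean = elpd_diff / n
    var = sum((d - mean) ** 2 for d in diff) / (n - 1)
    se_diff = math.sqrt(n * var)
    return elpd_diff, se_diff

d, se = elpd_compare([-2.0, -1.5, -2.2, -1.8], [-1.9, -1.4, -2.0, -1.9])
```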
cumulativemodelfit(value ~ 1, data=DF.test)
flexible equidistant
logit -2975.673 -3023.229
probit -2975.673 -3014.616
cloglog -2975.673 -3023.926
cauchit -2975.673 -3014.616
The best link function is probit with a equidistant threshold (logLik -2975.673)
ord.2E <- brm(
value ~ 1 + dataset +
(1|sub) + (1|itemnr),
data = DF.test,
family = cumulative(link = "probit", threshold="equidistant"),
file = 'ord.2E~random.RDS'
)
Get summary and marginal effects:
summary(ord.2E) #prob = .99 for 99% credible intervals
conditional_effects(ord.2E, "dataset", categorical = TRUE)
ord.3E <- brm(
value ~ 1 + cs(dataset) +
(cs(1)|sub) + (cs(1)|itemnr),
data = DF.test,
family = acat(link = "probit", threshold="equidistant"),
file = 'ord.3E~category.RDS'
)
summary(ord.3E)
Final output (Table S7):
# load required packages
library(sjPlot)
library(insight)
library(httr)
tab_model(
ord.2,ord.3,ord.2E,
show.se = FALSE,
#collapse.se = TRUE,
#pred.labels = c("Intercept", "linear predictor", "quadratic predictor"),
dv.labels = c("cumulative ordinal model", "adjacent-category ordinal model", "cumulative ordinal model (probit, equidistant)"),
show.obs=FALSE
)
Calculate the IDAQ per subject:
DF.idaq <- DF.idaq %>%
group_by(sub,dataset, scale) %>%
summarise(score = sum(value, na.rm = TRUE)) %>%
ungroup()%>%
mutate_at(vars(-score),as.factor)
`summarise()` regrouping output by 'sub', 'dataset' (override with `.groups` argument)
Visualise the scores across the datasets and scales (Figure S1):
source("R_rainclouds.R") #/Volumes/Project0255/code
source("summarySE.R")
theme_set(theme_classic(base_size = 15))
idaq_sum <- summarySEwithin(DF.idaq, measurevar="score", betweenvars="dataset", withinvars= "scale", idvar="sub")
FS1 <- DF.idaq %>%
group_by(sub,dataset, scale) %>%
ggplot(.,aes(x=dataset,y=score,fill=dataset, group = dataset))+
geom_flat_violin(position=position_nudge(x = .2, y = 0),adjust =2, trim = FALSE, alpha = .75, colour = "Black") +
geom_point(aes(colour = dataset), position=position_jitter(width = .05), size = .5, shape = 21, colour = "Black") +
geom_boxplot(aes(x=dataset,y=score),position=position_nudge(x = .1, y = 0),outlier.shape = NA, alpha = .5, width = .1, colour = "black") +
geom_point(data = idaq_sum, aes(x = dataset, y = score), position = position_nudge(.3), colour = "BLACK")+
geom_errorbar(data = idaq_sum, aes(x=dataset,y=score, ymin = score-ci, ymax = score+ci), position=position_nudge(x = .3, y = 0), width = .05)+
scale_fill_brewer(palette = "Greys") +
scale_colour_brewer(palette = "Greys") +
ylab(paste("score (0-150)")) +
ggtitle(paste("IDAQ scores across datasets")) +
theme(legend.position="none") +
facet_wrap(~scale)
FS1
ggsave("S1.TIFF", plot = FS1, width = 24.7, height = 15.24, units = "cm", dpi = 300)
Visualise the scores for IDAQ scale only (Figure 1A):
idaq_sum <- summarySEwithin(DF.idaq, measurevar="score", withinvars= "scale", idvar="sub")
F1A <- DF.idaq %>%
filter(scale == "IDAQ") %>%
group_by(sub) %>%
ggplot(.,aes(x = 1, y=score, fill = "Black"))+
geom_flat_violin(position=position_nudge(x = .2, y = 0),adjust =2, trim = FALSE, alpha = .75, colour = "Black") +
geom_point(aes(colour = dataset), position=position_jitter(width = .05), size = .5, shape = 21, colour = "Black") +
geom_boxplot(aes(x=1,y=score),position=position_nudge(x = .12, y = 0),outlier.shape = NA, alpha = .5, width = .1, colour = "black") +
geom_point(data = idaq_sum %>% filter(scale == "IDAQ"), aes(x = 0.95, y = score), position = position_nudge(.3), colour = "BLACK")+
geom_errorbar(data = idaq_sum %>% filter(scale == "IDAQ"), aes(x=0.95,y=score, ymin = score-ci, ymax = score+ci), position=position_nudge(x = .3, y = 0), width = .05)+
scale_fill_brewer(palette = "Greys") +
scale_colour_brewer(palette = "Greys") +
#theme_classic() +
coord_fixed(ratio = 1/90) +
ylab(paste("Dispositional anthropomorphism (0-150)")) +
#ggtitle(paste("Dispositional anthropomorphism")) +
theme(legend.position="none") +
theme(
axis.title.x = element_blank(),
axis.text.x = element_blank(),
axis.ticks.x = element_blank()
)
F1A
ggsave("F1A.TIFF", plot = F1A, width = 24.7, height = 15.24, units = "cm", dpi = 300)
Calculate the median and IQR per scale (Table S2):
DF.idaq %>%
#filter(!sub %in% sub_ex) %>%
group_by(sub,dataset, scale) %>%
summarise(score = sum(score)) %>%
group_by(scale) %>%
summarise(median = median(score),
iqr = IQR(score))
`summarise()` regrouping output by 'sub', 'dataset' (override with `.groups` argument)
`summarise()` ungrouping output (override with `.groups` argument)
Create function to load the data for the different networks:
library(fs)
library(tidyverse)
roi_extract <- function(datasetno, substart, subend, network, nroi) {
dir_ls(paste("dataset_", datasetno, "/derivatives/roi/", network, sep = ""), regexp = "\\.tsv$") %>%
map_dfr(read.delim, sep = "\t", .id = "id", header = FALSE) %>%
mutate(dataset = datasetno) %>%
mutate(network = network) %>%
mutate(network = str_extract(network, "tom|pain")) %>%
mutate(id = str_extract(id, "dmpfc|mmpfc|vmpfc|ltpj|rtpj|prec|amcc|lmfg|rmfg|ls2|rs2|linsula|rinsula")) %>%
rename(roi = id, contrast = V1) %>%
mutate(sub = rep(substart:subend, times=nroi, each=1)) %>%
select(5,3,4,1:2)
}
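Note that the sub column in roi_extract assumes map_dfr stacks one file per ROI, each holding one row per participant in subject order, so rep(substart:subend, times = nroi) labels the rows correctly. The same labelling logic in a quick sketch (hypothetical file layout):

```python
def label_subjects(substart, subend, nroi):
    """Reproduce R's rep(substart:subend, times = nroi): the block of
    subject IDs is repeated once per ROI file, matching the row order
    after stacking one single-column .tsv per ROI."""
    subs = list(range(substart, subend + 1))
    return subs * nroi

labels = label_subjects(101, 103, 2)
```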
Load the data for the Theory-of-Mind network:
DF.d1 <- roi_extract(1, 101, 129, "tom", 6)
DF.d2.a <- roi_extract(2, 201, 202, "tom/201_202", 6)
DF.d2.b <- roi_extract(2, 203, 235, "tom", 6)
DF.d3 <- roi_extract(3, 301, 322, "tom", 6)
DF.d4 <- roi_extract(4, 401, 422, "tom", 6)
DF.temp <- bind_rows(DF.d1, DF.d2.a, DF.d2.b, DF.d3, DF.d4)
Load the data for the Pain Matrix:
DF.d1 <- roi_extract(1, 101, 129, "pain", 7)
DF.d2.a <- roi_extract(2, 201, 202, "pain/201_202/", 7)
DF.d2.b <- roi_extract(2, 203, 235, "pain", 7)
DF.d3 <- roi_extract(3, 301, 322, "pain", 7)
DF.d4 <- roi_extract(4, 401, 422, "pain", 7)
DF.roi <- bind_rows(DF.temp, DF.d1, DF.d2.a, DF.d2.b, DF.d3, DF.d4)
rm(DF.d1, DF.d2.a, DF.d2.b, DF.d3, DF.d4, DF.temp)
Reorder ROI names for plots:
order <- c("rtpj", "ltpj", "prec", "vmpfc","mmpfc","dmpfc", "rs2", "ls2", "rinsula", "linsula", "rmfg", "lmfg", "amcc")
DF.roi <- DF.roi %>%
mutate_at(vars(-contrast),as.factor) %>%
group_by(sub, dataset) %>%
mutate(roi = fct_relevel(roi, order))
Plot the ToM activity across regions and datasets (Figure S2):
source("R_rainclouds.R") #/Volumes/Project0255/code
source("summarySE.R")
theme_set(theme_classic(base_size = 15))
roi_sum <- summarySEwithin(DF.roi, measurevar="contrast", betweenvars="dataset", withinvars= c("roi","network"), idvar="sub")
FS2 <- DF.roi %>%
filter(network == "tom") %>%
ggplot(.,aes(x=roi,y=contrast,fill=roi))+
geom_hline(yintercept = 0, color = "grey", linetype = 2) +
geom_flat_violin(position=position_nudge(x = .2, y = 0),adjust =2, trim = FALSE, colour = "Black") +
geom_point(aes(colour = roi, fill = roi), position=position_jitter(width = .05), size = .5, shape = 21, colour = "Black") +
geom_boxplot(aes(x=roi,y=contrast),position=position_nudge(x = .1, y = 0),outlier.shape = NA, alpha = .5, width = .1, colour = "black") +
geom_point(data = roi_sum %>% filter(network == "tom"), aes(x = roi, y = contrast), position = position_nudge(.3), colour = "BLACK")+
geom_errorbar(data = roi_sum %>% filter(network == "tom"), aes(x=roi,y=contrast, ymin = contrast-ci, ymax = contrast+ci), position=position_nudge(x = .3, y = 0), width = .05)+
#theme_classic() +
ylab(paste("contrast estimates (mental > pain)")) +
scale_fill_brewer(palette = "Blues") +
scale_colour_brewer(palette = "Blues") +
ggtitle(paste("Theory-of-Mind network contrasts estimates across datasets and regions")) +
theme(legend.position="none") +
facet_wrap(~dataset)
FS2
ggsave("S2.TIFF", plot = FS2, width = 24.7, height = 15.24, units = "cm", dpi = 300)
Plot the ToM activity across regions (Figure 1C):
source("R_rainclouds.R") #/Volumes/Project0255/code
source("summarySE.R")
theme_set(theme_classic(base_size = 15))
roi_sum <- summarySEwithin(DF.roi, measurevar="contrast", withinvars= c("roi","network"), idvar="sub")
F1C <- DF.roi %>%
filter(network == "tom") %>%
ggplot(.,aes(x=roi,y=contrast,fill=roi))+
geom_hline(yintercept = 0, color = "grey", linetype = 2) +
geom_flat_violin(position=position_nudge(x = .2, y = 0),adjust =2, trim = FALSE, colour = "Black") +
geom_point(aes(colour = roi, fill = roi), position=position_jitter(width = .05), size = .5, shape = 21, colour = "Black") +
geom_boxplot(aes(x=roi,y=contrast),position=position_nudge(x = .1, y = 0),outlier.shape = NA, alpha = .5, width = .1, colour = "black") +
geom_point(data = roi_sum %>% filter(network == "tom"), aes(x = roi, y = contrast), position = position_nudge(.3), colour = "BLACK")+
geom_errorbar(data = roi_sum %>% filter(network == "tom"), aes(x=roi,y=contrast, ymin = contrast-ci, ymax = contrast+ci), position=position_nudge(x = .3, y = 0), width = .05)+
#theme_classic() +
ylab(paste("Theory-of-Mind network activation")) +
scale_fill_brewer(palette = "Blues") +
scale_colour_brewer(palette = "Blues") +
#ggtitle(paste("Theory-of-Mind network contrasts estimates across datasets and regions")) +
theme(legend.position="none")
F1C
ggsave("F1C.TIFF", plot = F1C, width = 24.7, height = 15.24, units = "cm", dpi = 300)
Test for potential differences between datasets and ROIs:
# load required packages
library(sjPlot)
#refugeeswelcome
Plot the Pain Matrix activity across regions and datasets (Figure S3):
FS3 <- DF.roi %>%
filter(network == "pain") %>%
ggplot(.,aes(x=roi,y=contrast,fill=roi))+
geom_hline(yintercept = 0, color = "grey", linetype = 2) +
geom_flat_violin(position=position_nudge(x = .2, y = 0),adjust =2, trim = FALSE, colour = "Black") +
geom_point(aes(colour = roi, fill = roi), position=position_jitter(width = .05), size = .5, shape = 21, colour = "Black") +
geom_boxplot(aes(x=roi,y=contrast),position=position_nudge(x = .1, y = 0),outlier.shape = NA, alpha = .5, width = .1, colour = "black") +
geom_point(data = roi_sum %>% filter(network == "pain"), aes(x = roi, y = contrast), position = position_nudge(.3), colour = "BLACK")+
geom_errorbar(data = roi_sum %>% filter(network == "pain"), aes(x=roi,y=contrast, ymin = contrast-ci, ymax = contrast+ci), position=position_nudge(x = .3, y = 0), width = .05)+
#theme_classic() +
ylab(paste("contrast estimates (pain > mental)")) +
scale_fill_brewer(palette = "Reds") +
scale_colour_brewer(palette = "Reds") +
ggtitle(paste("Pain Matrix contrasts estimates across datasets and regions")) +
theme(legend.position="none") +
facet_wrap(~dataset)
FS3
ggsave("S3.TIFF", plot = FS3, width = 24.7, height = 15.24, units = "cm", dpi = 300)
Test for potential differences between datasets and ROIs:
pain.compare <- brm(
formula = contrast ~ roi * dataset,
data = DF.roi %>% filter(network == "pain"),
#family = cumulative("logit"),
file = 'pain.compare.RDS'
)
summary(pain.compare)
Family: gaussian
Links: mu = identity; sigma = identity
Formula: contrast ~ roi * dataset
Data: DF.roi %>% filter(network == "pain") (Number of observations: 756)
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
total post-warmup samples = 4000
Population-Level Effects:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept 0.73 0.07 0.59 0.86 1.00 1162 1832
roils2 -0.14 0.09 -0.32 0.05 1.00 1699 2755
roirinsula -0.42 0.10 -0.61 -0.24 1.00 1592 2578
roilinsula -0.44 0.10 -0.63 -0.26 1.00 1428 2394
roirmfg -0.53 0.10 -0.72 -0.34 1.00 1480 2705
roilmfg -0.35 0.10 -0.54 -0.17 1.00 1735 2682
roiamcc -0.54 0.10 -0.72 -0.35 1.00 1508 2540
dataset2 0.38 0.09 0.19 0.56 1.00 1645 2137
dataset3 -0.11 0.11 -0.32 0.09 1.00 1224 2211
dataset4 -0.08 0.10 -0.29 0.12 1.00 1646 2118
roils2:dataset2 -0.04 0.13 -0.29 0.21 1.00 2224 3033
roirinsula:dataset2 -0.24 0.13 -0.49 0.01 1.00 2080 2455
roilinsula:dataset2 -0.12 0.13 -0.38 0.14 1.00 2196 3008
roirmfg:dataset2 -0.35 0.13 -0.61 -0.09 1.00 2097 2840
roilmfg:dataset2 -0.53 0.13 -0.79 -0.27 1.00 2249 2796
roiamcc:dataset2 -0.12 0.13 -0.37 0.13 1.00 2070 3017
roils2:dataset3 0.05 0.15 -0.24 0.34 1.00 1786 2526
roirinsula:dataset3 -0.01 0.15 -0.29 0.28 1.00 1711 2230
roilinsula:dataset3 0.06 0.15 -0.22 0.35 1.00 1485 2966
roirmfg:dataset3 0.05 0.15 -0.24 0.34 1.00 1734 2535
roilmfg:dataset3 -0.04 0.15 -0.33 0.25 1.00 1881 2503
roiamcc:dataset3 0.22 0.15 -0.07 0.51 1.00 1839 2592
roils2:dataset4 0.06 0.14 -0.22 0.33 1.00 2078 2870
roirinsula:dataset4 0.07 0.15 -0.21 0.35 1.00 2141 2734
roilinsula:dataset4 0.18 0.15 -0.10 0.47 1.00 2028 2931
roirmfg:dataset4 0.07 0.15 -0.22 0.36 1.00 2079 2366
roilmfg:dataset4 -0.17 0.15 -0.45 0.13 1.00 2059 2524
roiamcc:dataset4 0.24 0.15 -0.04 0.53 1.00 2147 2756
Family Specific Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma 0.36 0.01 0.35 0.38 1.00 5135 3017
Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
plot(pain.compare)
conditional_effects(pain.compare)
# load required packages
library(sjPlot)
library(insight)
library(httr)
tab_model(
pain.compare,
show.se = FALSE,
#collapse.se = TRUE,
#pred.labels = c("Intercept", "linear predictor", "quadratic predictor"),
#dv.labels = c("cumulative ordinal model", "cumulative ordinal model (excluding 16 participants)"),
show.obs=FALSE
)
Create one DF: widen the questionnaire scores (one column per scale), then join them onto the ROI data by subject and dataset. Note the original pipe contained a no-op group_by()/ungroup() and passed three tables to left_join(); the corrected version is:
DF.roi <- DF.roi %>%
left_join(DF.idaq %>% pivot_wider(names_from = scale, values_from = score),
by = c("sub", "dataset"))
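As a sanity check of this reshape-then-join pattern, a minimal toy example (the tibbles and values below are invented for illustration, not taken from the study data):

```r
library(tidyverse)

# hypothetical long-format questionnaire data: one row per subject x scale
idaq_toy <- tribble(
  ~sub, ~dataset, ~scale,   ~score,
  "01", "1",      "IDAQ",   45,
  "01", "1",      "IDAQNA", 12,
  "02", "1",      "IDAQ",   60,
  "02", "1",      "IDAQNA", 20
)

# hypothetical ROI data: one row per subject x region
roi_toy <- tribble(
  ~sub, ~dataset, ~roi,   ~contrast,
  "01", "1",      "rtpj", 0.8,
  "02", "1",      "rtpj", 1.1
)

# widen the scales into columns, then attach them to every ROI row
roi_toy %>%
  left_join(idaq_toy %>% pivot_wider(names_from = scale, values_from = score),
            by = c("sub", "dataset"))
# each ROI row now carries its subject's IDAQ and IDAQNA scores
```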
Center the variables for the formal analysis:
DF.roi$cent_IDAQNA <- scale(DF.roi$IDAQNA, scale = TRUE)
DF.roi$cent_IDAQ <- scale(DF.roi$IDAQ, scale = TRUE)
DF.roi <- DF.roi %>% group_by(roi) %>% mutate(cent_contrast = scale(contrast, scale = TRUE)) %>% ungroup() #scale within each roi (analyses are run per roi)
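A quick toy check of the per-ROI standardisation (values invented for illustration): within each ROI the centred contrast has mean 0 and SD 1, regardless of the ROI's original scale.

```r
library(tidyverse)

toy <- tibble(roi = rep(c("a", "b"), each = 3),
              contrast = c(1, 2, 3, 10, 20, 30))
toy %>%
  group_by(roi) %>%
  mutate(cent = as.numeric(scale(contrast))) %>%  # standardise within roi
  summarise(m = mean(cent), s = sd(cent))
# both rois end up with m = 0 and s = 1
```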
Create scatterplots with linear and non-linear lines for ToM network:
library(patchwork)
theme_set(theme_classic(base_size = 15))
F1D <- DF.roi %>%
filter(network == "tom") %>%
ggplot(aes(x = cent_IDAQ, y = cent_contrast)) +
geom_point(alpha = 0.5, show.legend = FALSE) +
geom_smooth(method = "lm", formula = y ~ x, se = TRUE, show.legend = FALSE, colour = "#0072B2") +
geom_smooth(method = "lm", formula = y ~ x + I(x^2), se = TRUE, show.legend = FALSE, colour = "#D55E00") +
coord_fixed(ratio = 1/1) +
labs(
x="Dispositional anthropomorphism",
y="Theory-of-Mind network activation"
) #+ theme_classic()
F1D
ggsave("F1D.TIFF", plot = F1D, width = 24.7, height = 15.24, units = "cm", dpi = 300)
Create scatterplots with linear and non-linear lines for individual regions of ToM network:
F1E <- DF.roi %>%
filter(network == "tom") %>%
ggplot(aes(x = cent_IDAQ, y = cent_contrast)) +
geom_point(alpha = 0.5, show.legend = FALSE) +
geom_smooth(method = "lm", formula = y ~ x, se = TRUE, show.legend = FALSE, colour = "#0072B2") +
geom_smooth(method = "lm", formula = y ~ x + I(x^2), se = TRUE, show.legend = FALSE, colour = "#D55E00") +
coord_fixed(ratio = 1/1) +
facet_wrap(~roi, ncol = 6) +
labs(
x="Dispositional anthropomorphism",
y="Theory-of-Mind network activation"
) + theme_classic()
F1E
ggsave("F1E.TIFF", plot = F1E, width = 24.7, height = 15.24, units = "cm", dpi = 300)
For the formal analysis we will test a linear and quadratic relationship between IDAQ and ToM network activity:
DF.roi$cent_IDAQ2 <- DF.roi$cent_IDAQ^2
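Squaring the centred (rather than raw) score matters: raw positive scores and their squares are nearly collinear, which makes the linear and quadratic coefficients hard to separate. A small simulation (invented scores) illustrates this:

```r
set.seed(42)
x_raw  <- runif(100, min = 10, max = 70)  # hypothetical IDAQ-like raw scores
x_cent <- as.numeric(scale(x_raw))        # centred and standardised
cor(x_raw, x_raw^2)    # close to 1: a raw score and its square are collinear
cor(x_cent, x_cent^2)  # close to 0 after centring
```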
We run the models with brms's default priors (flat on the regression coefficients) and create a function to fit one model per region:
reg_model <- function(region){
brm(formula = cent_contrast ~ cent_IDAQ + cent_IDAQ2,
data = DF.roi %>% filter(roi == region),
family = gaussian,
chains = 4,
iter = 4000,
seed = 42, #so the model is reproducible
file = paste0(region, ".RDS"))
}
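The file argument is what keeps re-runs cheap: brm() saves the fitted object to disk with saveRDS() and silently loads it on subsequent calls. The same fit-or-load pattern in plain base R, for intuition (fit_cached is a hypothetical helper, not part of brms):

```r
# generic fit-or-load cache, mirroring the behaviour of brm(file = ...):
# reuse a saved fit if one exists, otherwise fit and save it
fit_cached <- function(path, fit_fun) {
  if (file.exists(path)) return(readRDS(path))
  fit <- fit_fun()
  saveRDS(fit, path)
  fit
}
```

Deleting the saved .RDS file forces a refit, e.g. after changing the formula or the data.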
Run the models for the ToM network first; a previously fitted model is loaded from disk (via the file argument) instead of being refit:
library("brms")
rtpj <- reg_model("rtpj")
ltpj <- reg_model("ltpj")
prec <- reg_model("prec")
vmpfc <- reg_model("vmpfc")
mmpfc <- reg_model("mmpfc")
dmpfc <- reg_model("dmpfc")
Get the summaries (verbatim, run individually):
summary(rtpj)
Family: gaussian
Links: mu = identity; sigma = identity
Formula: cent_contrast ~ cent_IDAQ + cent_IDAQ2
Data: DF.roi %>% filter(roi == region) (Number of observations: 108)
Samples: 4 chains, each with iter = 4000; warmup = 2000; thin = 1;
total post-warmup samples = 8000
Population-Level Effects:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept 0.03 0.12 -0.20 0.27 1.00 8407 6013
cent_IDAQ -0.05 0.10 -0.24 0.15 1.00 8721 6323
cent_IDAQ2 -0.03 0.07 -0.17 0.10 1.00 9151 5801
Family Specific Parameters:
Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma 1.02 0.07 0.90 1.17 1.00 7654 5451
Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
Use sjPlot to create HTML tables, following the procedure outlined here:
# load required packages
library(sjPlot)
library(insight)
library(httr)
tab_model(
rtpj,ltpj,prec,vmpfc,mmpfc,dmpfc,
show.se = TRUE,
#collapse.se = TRUE,
pred.labels = c("Intercept", "linear predictor", "quadratic predictor"),
dv.labels = c("rtpj", "ltpj","prec","vmpfc","mmpfc","dmpfc"),
show.obs=FALSE
)
Create plots based on this tutorial:
library(tidyverse)
tom_combined <- bind_rows("rtpj" = as_tibble(as.mcmc(rtpj, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"), combine_chains = TRUE)),
"ltpj" = as_tibble(as.mcmc(ltpj, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"prec" = as_tibble(as.mcmc(prec, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"vmpfc" = as_tibble(as.mcmc(vmpfc, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"mmpfc" = as_tibble(as.mcmc(mmpfc, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"dmpfc" = as_tibble(as.mcmc(dmpfc, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
.id = "roi")
order <- c("rtpj", "ltpj", "prec", "vmpfc", "mmpfc", "dmpfc", "rs2", "ls2", "rinsula", "linsula", "rmfg", "lmfg", "amcc") # display order: first six = ToM network, last seven = Pain Matrix
tom_combined <- tom_combined %>%
rename(linear = b_cent_IDAQ, quadratic = b_cent_IDAQ2) %>%
pivot_longer(c("linear", "quadratic"),names_to = "varIDAQ") %>%
mutate(roi = fct_relevel(roi, order[1:6]))
Plot the distributions:
F2 <- tom_combined %>%
mutate_at(vars(-value),as.factor) %>%
rename(predictor = varIDAQ) %>%
ggplot(., aes(value, fill = predictor)) +
geom_density(alpha = .5)+
geom_vline(xintercept = 0, color = "black", linetype = 2) +
ggtitle("Posterior distributions for the Theory-of-Mind network") +
facet_wrap(~roi, ncol =1) +
coord_fixed(ratio = 1/40) +
theme_classic(base_size = 8) +
theme(strip.background = element_blank()) +
scale_fill_manual(values=c("#0072B2", "#D55E00"))+
theme(axis.text.y = element_blank())
F2
ggsave("F2.TIFF", plot = F2, width = 24.7, height = 15.24, units = "cm", dpi = 300)
Create scatterplots with linear and non-linear lines for the Pain Matrix:
S4 <- DF.roi %>%
filter(network == "pain") %>%
ggplot(aes(x = cent_IDAQ, y = cent_contrast)) +
geom_point(alpha = 0.5, show.legend = FALSE) +
geom_smooth(method = "lm", formula = y ~ x, se = TRUE, show.legend = FALSE, colour = "#0072B2") +
geom_smooth(method = "lm", formula = y ~ x + I(x^2), se = TRUE, show.legend = FALSE, colour = "#D55E00") +
#coord_fixed(ratio = 25/1) +
labs(
x="IDAQ score",
y="contrast (pain vs mental)"
) + theme_classic()
S4
ggsave("S4.TIFF", plot = S4, width = 24.7, height = 15.24, units = "cm", dpi = 300)
Create scatterplots with linear and non-linear lines for individual regions of the Pain Matrix:
S5 <- DF.roi %>%
filter(network == "pain") %>%
ggplot(aes(x = cent_IDAQ, y = cent_contrast)) +
geom_point(alpha = 0.5, show.legend = FALSE) +
geom_smooth(method = "lm", formula = y ~ x, se = TRUE, show.legend = FALSE, colour = "#0072B2") +
geom_smooth(method = "lm", formula = y ~ x + I(x^2), se = TRUE, show.legend = FALSE, colour = "#D55E00") +
#coord_fixed(ratio = 25/1) +
labs(
x="IDAQ score",
y="contrast (pain vs mental)"
) + theme_classic() +
facet_wrap(~roi, nrow = 2)
S5
ggsave("S5.TIFF", plot = S5, width = 24.7, height = 15.24, units = "cm", dpi = 300)
Run the models for the Pain Matrix (control network); a previously fitted model is loaded from disk instead of being refit:
rs2 <- reg_model("rs2")
ls2 <- reg_model("ls2")
rinsula <- reg_model("rinsula")
linsula <- reg_model("linsula")
rmfg <- reg_model("rmfg")
lmfg <- reg_model("lmfg")
amcc <- reg_model("amcc")
Get the summaries (verbatim, run individually):
summary(rs2)
summary(ls2)
summary(rinsula)
summary(linsula)
summary(rmfg)
summary(lmfg)
summary(amcc)
Use sjPlot to create HTML tables, following the procedure outlined here:
tab_model(
rs2,ls2,rinsula,linsula,rmfg,lmfg,amcc,
show.se = TRUE,
#collapse.se = TRUE,
pred.labels = c("Intercept", "linear predictor", "quadratic predictor"),
dv.labels = c("rs2","ls2","rinsula","linsula","rmfg","lmfg","amcc"),
show.obs=FALSE
)
Create plots based on this tutorial:
pain_combined <- bind_rows("rs2" = as_tibble(as.mcmc(rs2, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"), combine_chains = TRUE)),
"ls2" = as_tibble(as.mcmc(ls2, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"rinsula" = as_tibble(as.mcmc(rinsula, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"linsula" = as_tibble(as.mcmc(linsula, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"rmfg" = as_tibble(as.mcmc(rmfg, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"lmfg" = as_tibble(as.mcmc(lmfg, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
"amcc" = as_tibble(as.mcmc(amcc, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"),combine_chains = TRUE)),
.id = "roi")
pain_combined <- pain_combined %>%
rename(linear = b_cent_IDAQ, quadratic = b_cent_IDAQ2) %>%
pivot_longer(c("linear", "quadratic"),names_to = "varIDAQ") %>%
mutate(roi = fct_relevel(roi, order[7:13]))
Plot the distributions:
S6 <- pain_combined %>%
mutate_at(vars(-value),as.factor) %>%
rename(predictor = varIDAQ) %>%
ggplot(., aes(value, fill = predictor)) +
geom_density(alpha = .5)+
geom_vline(xintercept = 0, color = "black", linetype = 2) +
ggtitle("Posterior distributions for the Pain Matrix") +
facet_wrap(~roi, ncol =1) +
coord_fixed(ratio = 1/40) +
theme_classic(base_size = 8) +
theme(strip.background = element_blank()) +
scale_fill_manual(values=c("#0072B2", "#D55E00"))
#theme(axis.text.y = element_blank())
S6
ggsave("S6.TIFF", plot = S6, width = 24.7, height = 15.24, units = "cm", dpi = 300)
All ToM regions combined:
tom <- brm(formula = cent_contrast ~ cent_IDAQ + cent_IDAQ2,
data = DF.roi %>% filter(network == "tom"),
family = gaussian,
chains = 4,
iter = 4000,
seed = 42, #so the model is reproducible
file = "tom.RDS")
Summary:
summary(tom)
All Pain Matrix regions combined:
pain <- brm(formula = cent_contrast ~ cent_IDAQ + cent_IDAQ2,
data = DF.roi %>% filter(network == "pain"),
family = gaussian,
chains = 4,
iter = 4000,
seed = 42, #so the model is reproducible
file = "pain.RDS")
Summary:
summary(pain)
tab_model(
tom, pain,
show.se = TRUE,
#collapse.se = TRUE,
pred.labels = c("Intercept", "linear predictor", "quadratic predictor"),
dv.labels = c("tom","pain"),
show.obs=FALSE
)
tom_all <- as_tibble(as.mcmc(tom, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"), combine_chains = TRUE)) %>%
rename(linear = b_cent_IDAQ, quadratic = b_cent_IDAQ2) %>%
pivot_longer(c("linear", "quadratic"), names_to = "varIDAQ") %>%
mutate(network = "tom")
pain_all <- as_tibble(as.mcmc(pain, pars = c("b_cent_IDAQ", "b_cent_IDAQ2"), combine_chains = TRUE)) %>%
rename(linear = b_cent_IDAQ, quadratic = b_cent_IDAQ2) %>%
pivot_longer(c("linear", "quadratic"), names_to = "varIDAQ") %>%
mutate(network = "pain")
tom_all %>% bind_rows(., pain_all) %>%
mutate_at(vars(-value),as.factor) %>%
rename(predictor = varIDAQ) %>%
ggplot(., aes(value, fill = predictor)) +
geom_density(alpha = .5)+
geom_vline(xintercept = 0, color = "black", linetype = 2) +
ggtitle("Posterior distributions across the two networks") +
facet_wrap(~network) +
coord_fixed(ratio = 1/40) +
theme_classic(base_size = 16) +
scale_fill_manual(values=c("#0072B2", "#D55E00"))+
theme(strip.background = element_blank())
library("easystats")
hdi_rope <- function(model){
rope <- rope_range(model)
rope_model <- equivalence_test(model, range = rope, ci = 0.95)
#rope_model %>% mutate(roi = as.character(model$file))
plot(rope_model) + theme_classic() +
ggtitle(deparse(substitute(model))) +
xlim(-0.15,0.15) +
theme(axis.title.y = element_blank(), axis.title.x = element_blank())
#scale_fill_manual(values=c("#0072B2", "#D55E00"))
}
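For a Gaussian outcome, rope_range() follows the convention of treating effects within ±0.1 standard deviations of the response as negligible (the bayestestR default for linear models, after Kruschke). The equivalent manual computation, with a stand-in outcome vector:

```r
set.seed(1)
y <- rnorm(200)               # stand-in for the model's response variable
rope <- c(-0.1, 0.1) * sd(y)  # negligible-effect bounds: +/- 0.1 SD of y
rope                          # roughly -0.1 and 0.1 here, since sd(y) ~ 1
```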
hdi_rope(tom) + hdi_rope(pain) + plot_layout(guides = 'collect')
((hdi_rope(rtpj) / hdi_rope(ltpj) / hdi_rope(prec)) |
(hdi_rope(vmpfc) / hdi_rope(mmpfc) / hdi_rope(dmpfc))) + plot_layout(guides = 'collect')
(hdi_rope(rtpj) | hdi_rope(ltpj)) /
(hdi_rope(prec) | hdi_rope(vmpfc)) /
(hdi_rope(mmpfc) | hdi_rope(dmpfc)) + plot_layout(guides = 'collect')
## 10.1 Theory-of-Mind network
Update the models by removing the 16 participants who completed a different version of the IDAQ (1-10). We create a function to do this:
update_model <- function(model_old, region){
# sub_ex: vector with the IDs of the 16 excluded participants (defined earlier in this document)
model_new <- update(model_old, newdata = DF.roi %>% filter(roi == region & !sub %in% sub_ex), seed = 41)
m1 <- posterior_summary(model_old)[c("b_cent_IDAQ", "b_cent_IDAQ2"), "Estimate"]
m2 <- posterior_summary(model_new)[c("b_cent_IDAQ", "b_cent_IDAQ2"), "Estimate"]
tibble(bias = round(100 * ((m1 - m2) / m2), 2), predictor = c("b_cent_IDAQ", "b_cent_IDAQ2"))
}
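The returned bias value is simply the percentage difference between the two posterior means; with hypothetical numbers:

```r
# hypothetical posterior mean estimates for one predictor
m1 <- -0.05  # full sample
m2 <- -0.04  # without the 16 excluded participants
round(100 * ((m1 - m2) / m2), 2)
# 25: the full-sample estimate is 25% larger than the reduced-sample one
```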
Calculate the difference (bias) per ToM region. Note that update_model() returns a tibble rather than a brms fit, so tab_model() cannot be applied to its output (it fails with "A data frame is no valid object for this function"); print the tibbles directly:
rtpj_bias <- update_model(rtpj, "rtpj")
ltpj_bias <- update_model(ltpj, "ltpj")
prec_bias <- update_model(prec, "prec")
vmpfc_bias <- update_model(vmpfc, "vmpfc")
mmpfc_bias <- update_model(mmpfc, "mmpfc")
dmpfc_bias <- update_model(dmpfc, "dmpfc")
rtpj_bias
## 10.2 Pain Matrix
Calculate the difference (bias) per Pain Matrix region, passing each region's own model (the original call, update_model(rtpj, "rs2"), mixed up model and region and errored):
rs2_bias <- update_model(rs2, "rs2")
ls2_bias <- update_model(ls2, "ls2")
rinsula_bias <- update_model(rinsula, "rinsula")
linsula_bias <- update_model(linsula, "linsula")
rmfg_bias <- update_model(rmfg, "rmfg")
lmfg_bias <- update_model(lmfg, "lmfg")
amcc_bias <- update_model(amcc, "amcc")
sessionInfo()